Normalisation of Deviance: A Hidden Threat in Modern Workplaces

In an ideal world, every organization strives for excellence by strictly adhering to established standards, protocols, and procedures. But in reality, many industries, from aerospace to healthcare, grapple with a creeping issue known as the “normalisation of deviance”. This insidious phenomenon occurs when individuals or groups within an organization gradually accept lower standards of performance or safety as the norm—often without recognizing the risks this poses until a disaster strikes.

What Is Normalisation of Deviance?

The term normalisation of deviance was coined by sociologist Diane Vaughan while studying the 1986 Challenger space shuttle disaster. Vaughan described how NASA engineers, over time, became desensitized to small deviations from safety standards. As these small errors or risks continued without immediate consequences, they were gradually accepted as routine. Eventually, this complacency led to a catastrophic event.

Normalisation of deviance occurs when deviations from established norms, rules, or best practices become normalized because “nothing bad has happened yet.” Over time, people and organizations lose sight of why the rules existed in the first place, and risky behaviors or decisions become the standard operating procedure.

How Does It Happen?

The process of normalising deviance typically follows a subtle, gradual trajectory. It can arise for several reasons:

  1. Pressure to Perform: Whether it’s meeting a deadline, reducing costs, or boosting productivity, employees or management may feel pressure to take shortcuts. If the shortcut works without immediate negative consequences, it can quickly become normalized.
  2. Desensitization: After seeing small deviations go unpunished or unnoticed, people become desensitized. If “cutting corners” doesn’t result in failure right away, the behavior may become more frequent and accepted.
  3. Cultural Drift: Over time, organizations may experience a cultural drift where old practices and guidelines are informally replaced by new, riskier behaviors. In this environment, individuals may assume that “this is how things are done now” without questioning the reasons behind the change.
  4. Groupthink: When teams collectively normalize deviant behaviors, it becomes harder for individuals to speak out. In such cases, employees may believe that if others are accepting the behavior, it must be okay.

5 Real-World Examples of Normalisation of Deviance

The NASA Challenger disaster is a prime example of how the normalisation of deviance can lead to catastrophic failures. Along with accidents like Chernobyl, the Boeing 737 MAX crashes, the Skyguide mid-air collision and the BP Deepwater Horizon Oil Spill, the Challenger tragedy demonstrates how gradual acceptance of risky behavior can have dire consequences.

1. NASA Challenger Disaster (1986):

On January 28, 1986, the space shuttle Challenger disintegrated 73 seconds after liftoff, resulting in the deaths of all seven crew members. The cause of the disaster was the failure of an O-ring seal on one of the solid rocket boosters, which allowed hot gases to escape and ultimately led to the explosion.

The seeds of this disaster were sown long before the actual launch. Engineers at NASA and its contractor, Morton Thiokol, had been aware of the O-ring issue for years. The O-rings were designed to seal the joints of the solid rocket boosters, but under certain low-temperature conditions, they were prone to becoming brittle and failing. Despite these known concerns, previous flights had been launched successfully without catastrophic consequences, leading to a growing tolerance of this design flaw.

This gradual acceptance of risk is a textbook example of the normalisation of deviance. NASA and Morton Thiokol allowed launches to proceed despite knowing the potential for O-ring failure. The logic was that because no serious accidents had occurred previously, the risks must be minimal. As a result, NASA’s leadership proceeded with the Challenger launch on a bitterly cold January morning, even though engineers had expressed concerns about the temperature’s effect on the O-rings.

The Challenger disaster illustrates how small, seemingly inconsequential deviations from safety protocols can become normalized over time. In this case, the complacency surrounding the O-ring issue became fatal when the right set of conditions—low temperature and the resulting brittleness—finally triggered a disaster.
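
To see just how weak “no failures so far” is as evidence of safety, consider a rough back-of-the-envelope sketch in Python. The per-flight risk figure is a hypothetical assumption chosen for illustration, not NASA’s actual estimate:

```python
# Illustrative only: the per-flight risk below is a hypothetical assumption,
# not NASA's actual estimate. The point is that a streak of successful
# launches is weak evidence that a known flaw is safe to tolerate.

def prob_of_clean_streak(per_flight_risk: float, flights: int) -> float:
    """Probability of observing zero failures across `flights` launches,
    assuming independent launches with the same per-flight risk."""
    return (1 - per_flight_risk) ** flights

# Suppose the O-ring flaw carried a 1-in-50 chance of destroying a flight.
assumed_risk = 1 / 50
for flights in (5, 10, 24):
    print(f"{flights} launches, all successful: "
          f"{prob_of_clean_streak(assumed_risk, flights):.0%} likely")

# Even with that level of danger, 24 consecutive successes occur roughly
# 62% of the time, so "nothing bad has happened yet" says little about
# whether the flaw is actually safe.
```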

Normalisation of deviance is not limited to space travel; it is prevalent in industries as diverse as healthcare, aviation, manufacturing, and finance. Some famous examples include:

2. Chernobyl Disaster (1986):

The Chernobyl nuclear disaster is one of the most infamous examples of normalisation of deviance. On April 26, 1986, an explosion occurred at the Chernobyl Nuclear Power Plant in Ukraine, releasing large amounts of radioactive material into the environment. This catastrophic event was the result of numerous safety violations, technical design flaws, and poor decision-making.

At the heart of the disaster was a routine safety test that required disabling critical safety systems. The test had been postponed earlier in the day, yet plant operators decided to proceed during the night shift, even though that crew lacked the expertise and familiarity needed to run it safely. In the years leading up to the event, Chernobyl’s staff had grown accustomed to bypassing safety protocols and overriding alarms, as no immediate consequences had arisen from earlier deviations. This gradual erosion of safety culture—combined with a flawed reactor design—led to a catastrophic nuclear meltdown that impacted millions of lives across Europe.

The disaster at Chernobyl illustrates how consistent small deviations from safety norms—without repercussions—can lull workers into believing the risks are minimal. In reality, these accumulated deviations created a perfect storm for failure, leading to one of the worst nuclear disasters in history.

3. Skyguide Mid-Air Collision (Überlingen, 2002):

On the night of July 1, 2002, two planes—a Russian passenger aircraft and a DHL cargo plane—collided mid-air over the town of Überlingen, Germany. The accident claimed the lives of all 71 people on board the two aircraft, including many children. The incident was a direct consequence of a series of failures within Skyguide, the Swiss air traffic control organization responsible for monitoring the airspace at the time.

In the lead-up to the disaster, Skyguide had normalized several deviant practices. On the night of the accident, a single air traffic controller was left in charge of multiple airspaces while his colleague was on a break, a deviation from the standard practice of having at least two controllers on duty. Additionally, essential safety systems, including phone links to adjacent control centers, were under maintenance and not fully operational. These actions were seen as acceptable because no immediate negative consequences had occurred in prior instances when such deviations were made.

As a result, the overburdened controller was slow to notice the impending collision. When he did, his instructions conflicted with the automated TCAS system onboard the aircraft, further confusing the pilots. The Russian pilot, following the controller’s instruction instead of TCAS, descended into the path of the DHL cargo plane, leading to the tragic collision.

The Überlingen crash is a stark reminder of how deviations from safety standards, even when done in the interest of efficiency or convenience, can become normalized over time. In this case, the normalization of reduced staffing levels and inadequate maintenance of critical systems directly contributed to the loss of 71 lives.

4. Boeing 737 MAX Accidents (2018 and 2019):

The tragic crashes of two Boeing 737 MAX aircraft, Lion Air Flight 610 in 2018 and Ethiopian Airlines Flight 302 in 2019, also stemmed from the normalisation of deviance. In both accidents, a faulty system known as the Maneuvering Characteristics Augmentation System (MCAS) triggered an uncontrollable downward pitch, resulting in the loss of 346 lives.

At the core of these crashes was Boeing’s decision to implement MCAS software to compensate for the handling changes introduced by the MAX’s larger, repositioned engines. However, the system had significant flaws that pilots were not adequately trained to manage. Boeing downplayed the risks associated with MCAS to accelerate certification processes and minimize pilot training costs. Additionally, deviations in safety oversight practices were normalized as regulatory bodies, such as the FAA, began delegating more responsibilities to Boeing itself.

Over time, Boeing’s willingness to bypass key safety reviews, in conjunction with regulators’ growing complacency, became normalized, contributing to the tragedy. The crashes served as a wake-up call, highlighting how a long-standing culture of shortcuts and risk minimization can lead to deadly outcomes.

5. BP Deepwater Horizon Oil Spill (2010):

The Deepwater Horizon oil spill occurred on April 20, 2010, when an explosion on the offshore oil rig, operated by BP, caused a blowout, releasing millions of barrels of crude oil into the Gulf of Mexico. This disaster is considered the largest marine oil spill in history, and it caused significant environmental, economic, and human losses.

The immediate technical failure was a blowout that the rig’s blowout preventer (BOP), a critical safety device designed to seal the well in the event of a pressure surge, failed to contain. However, the factors leading to the explosion had developed over time due to a pattern of cost-cutting, rushed decisions, and safety compromises that had become normalized within BP’s operations.

  1. Cutting Corners for Speed: BP, under significant financial pressure, was eager to complete drilling ahead of schedule. Throughout the project, several warning signs were ignored, including issues with cementing the well and failed pressure tests. Despite these red flags, BP and its contractors (Transocean and Halliburton) chose to proceed, dismissing safety concerns to avoid further delays and costs.
  2. Normalization of High-Risk Practices: BP had a history of prioritizing speed and cost savings over rigorous safety protocols. Decisions like using fewer centralizers in the well, or skipping quality control steps in testing the cement seal, were examples of normalized deviant behavior—small risks that were deemed acceptable because previous shortcuts had not resulted in immediate disaster.
  3. Complacency in Safety Culture: The lack of immediate negative consequences from these risky decisions over time contributed to a sense of complacency within BP and its partners. This complacency, combined with a culture that rewarded speed and cost savings, resulted in a failure to recognize the growing cumulative risk until it was too late.

When the well blew out and the blowout preventer failed to seal it, the resulting explosion killed 11 workers and initiated an oil spill that would continue for 87 days. The environmental impact was catastrophic, with long-term damage to marine ecosystems and significant economic repercussions for industries reliant on the Gulf.

Why Is It Dangerous?

The Skyguide mid-air collision, along with disasters like Chernobyl, the Boeing 737 MAX crashes, and the BP Deepwater Horizon Oil Spill, highlights the grave risks associated with the normalisation of deviance. These incidents underscore the dangers of becoming complacent with deviations from safety protocols:

1. Erosion of Safety:

In high-risk industries like aviation or healthcare, small deviations from protocol can have deadly consequences. Over time, each small slip compounds the risks, increasing the likelihood of an accident.
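
As a rough illustration of this compounding effect, the sketch below uses assumed numbers (not data from any of the incidents above) to show how a deviation that looks negligible in isolation adds up once it becomes routine:

```python
# Illustrative sketch with assumed numbers (not data from any case above):
# how an individually "acceptable" deviation compounds into a large
# cumulative risk once it becomes routine.

def cumulative_risk(per_event_risk: float, repetitions: int) -> float:
    """Probability of at least one failure across repeated, independent events."""
    return 1 - (1 - per_event_risk) ** repetitions

# A shortcut that carries "only" a 1-in-1000 chance of going wrong each time...
per_event = 1 / 1000
for times in (100, 500, 1000):
    print(f"repeated {times} times: "
          f"{cumulative_risk(per_event, times):.0%} chance of at least one failure")

# ...reaches roughly a 63% chance of at least one failure after a thousand
# repetitions, even though every single instance still looks safe in isolation.
```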

2. Complacency:

As deviant behaviors become routine, organizations become complacent, believing that if they’ve gotten away with it before, they can continue to do so. This can lead to a culture of risk acceptance, where safety concerns are downplayed or ignored.

3. Decreased Accountability:

Once deviant behavior is normalized, it becomes harder to hold individuals accountable for departing from established protocols. This can lead to a breakdown in organizational discipline, making it harder to enforce standards when they’re most needed.

How to Combat Normalisation of Deviance

Preventing the normalisation of deviance requires constant vigilance, strong leadership, and a culture that values adherence to standards. Here are some key strategies to prevent it:

1. Foster a Culture of Accountability:

Organizations must prioritize accountability at all levels. Employees should be empowered to speak up when they see deviations from standards and encouraged to question decisions that seem to violate established protocols.

2. Reinforce the Importance of Protocols:

Leaders must consistently remind teams why protocols exist in the first place. It’s essential to make the connection between rules and safety, productivity, or quality clear to all employees.

3. Regular Audits and Reviews:

Frequent audits and reviews of processes can help identify deviations before they become normalized. Periodic reassessment of procedures ensures that organizations stay on track and that any drift is corrected before it becomes a problem.

4. Open Communication Channels:

Create safe and open channels for employees to report concerns or potential risks. If employees feel they can raise issues without fear of reprisal, they are more likely to flag deviations early on.

5. Lead by Example:

Leadership must demonstrate adherence to standards and protocols themselves. When managers and leaders cut corners, they signal to the rest of the organization that it’s acceptable behavior, further normalizing deviance.

Conclusion

The normalisation of deviance is a silent but pervasive threat in any organization. While it may seem harmless in the short term, over time, small deviations from established norms can snowball into major failures or disasters. By fostering a culture of accountability, maintaining vigilance, and reinforcing the importance of protocols, organizations can protect themselves from the risks associated with normalising deviance. After all, in high-stakes environments, it’s often the small, overlooked details that determine success or failure.
