Culturing Science – biology as relevant to us earthly beings

Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology

When I read updates on blogs or the news about the BP oil spill, my expression is generally very serious: furrowed brow, pursed lips which I’m probably chewing in alternation with gnawing a nail.  But last week I laughed out loud, a true LOL, a brash guffaw.  (“What?!” my labmates inquired.)

I had read this New York Times article recounting the reactions of the executives of other oil companies during the Congressional hearing as they attempted to assert that this sort of accident would never occur at their own companies’ wells.

“We would not have drilled the well the way they did,” said Rex W. Tillerson, chief executive of Exxon Mobil.

“It certainly appears that not all the standards that we would recommend or that we would employ were in place,” said John S. Watson, chairman of Chevron.

“It’s not a well that we would have drilled in that mechanical setup,” said Marvin E. Odum, president of Shell.

The idea that this would never happen at another deep-sea well is preposterous to me.  That the risks of drilling a mile into the ocean – to depths that require robots (yet another form of technology) for access, in order to draw pressurized matter back up from mostly unexplored pockets – can be calculated and prepared for seems absolutely ridiculous.  And although the execs are using exact and technical language to ensure that they can never be called hypocrites, the message they are trying to send is: BP messed up.  We act more responsibly and would never have made such mistakes.  We should be allowed to continue drilling in the deep.

Many people seem ready to play the blame game, pin the whole thing on BP and call it a day.  I, however, think that this accident presents an opportunity for us to reflect upon what it means to be a society reliant on complex technologies whose failures can cause disaster.

I. A little bit of theory…

When talking about risk theory and safety, two main ideas come up in the scholarship: Normal Accidents Theory (NAT) and the High Reliability Organization Framework (HROF), which you can read about in quite thorough detail in this article from Organization Studies.

The term “normal accidents” was coined by Charles Perrow in his 1984 book Normal Accidents: Living with High-Risk Technologies (available on Google Books) to describe accidents that are not caused by a single, definite error, but rather arise from inherent problems in complex systems.  The two qualities that lead toward “normal accidents” or “system accidents” are:

  1. The system is complex enough that not all outcomes can be predicted, so two failures can interact in an unexpected way, hiding the true cause of the problem; and
  2. The system is “tightly coupled” – processes happen very quickly, and the parts are entwined so closely that individual parts cannot be separated from one another.

These two qualities combined create a system for which there is “insufficient time and understanding to control incidents and avoid accidents,” as the Organization Studies article states.
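
To make these two qualities a bit more concrete, here is a toy simulation – purely my own illustrative sketch with made-up numbers, not anything from Perrow or the papers cited below – of a system of identical parts.  In the loosely coupled version, a failed part is caught and fixed on its own; in the tightly coupled version, a failure can knock out its neighbors before anyone has time to intervene, producing exactly the kind of overlapping, cause-hiding failures described above.

    import random

    def count_multi_failures(n_parts=50, steps=1000, p_fail=0.001, coupling=0.0, seed=1):
        """Count steps on which two or more parts are down at once - the
        situations where interacting failures can hide the true cause."""
        rng = random.Random(seed)
        multi = 0
        for _ in range(steps):
            # each part fails independently with a small probability
            failed = {i for i in range(n_parts) if rng.random() < p_fail}
            # tight coupling: a failure spreads to each neighbor with
            # probability `coupling` before anyone has time to intervene
            for i in list(failed):
                for j in (i - 1, i + 1):
                    if 0 <= j < n_parts and rng.random() < coupling:
                        failed.add(j)
            if len(failed) >= 2:
                multi += 1
        return multi

    print("loosely coupled:", count_multi_failures(coupling=0.0), "steps with overlapping failures")
    print("tightly coupled:", count_multi_failures(coupling=0.5), "steps with overlapping failures")

With these invented numbers, each individual part is exactly as reliable in both runs, yet the tightly coupled system racks up many times more steps with overlapping failures – the raw material for a “normal accident” – than the loosely coupled one.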

Perrow himself developed this theory after the incident at Three Mile Island, a nuclear power plant outside Harrisburg, Pennsylvania, which underwent a partial core meltdown in 1979.  In this near-disaster, two seemingly contradictory “safety devices,” meant to alert the crew to problems in the reactor, went off simultaneously, distracting the staff from the real problem: a stuck-open relief valve.  Luckily, an engineer put the pieces together with less than an hour to spare.  This is an example of a “normal accident” – one in which the complexity of the reactor, that is, the system’s “normal” existence, nearly caused disaster by itself.

In reaction to Normal Accidents Theory, the more optimistic High Reliability Organization Framework was born.  Its originators, Todd La Porte and Karlene Roberts, describe an alternate scenario in which complex systems are able to run incredibly smoothly, and without fail, for long periods of time.  Citing air traffic control operations as an example, they argue that the technology itself is not the issue, but rather the complexity of the organization running it.  As long as all the people working on the ground are highly trained in both the technical functioning of the system and in safety, complex systems are not doomed to fail.

While both theories are flawed (as the article mentioned above outlines), I find Normal Accidents Theory to be more useful.  It seems obvious that if all employees were highly trained in all areas, things would flow smoothly.  But, I’m sorry to report, that doesn’t seem to be the case for most systems and industries.  Normal Accidents Theory offers a different way of looking at technology and thinking about accidents – a view that recognizes an inherent danger and counsels a healthy wariness.  A useful view in terms of planning, training, and honesty.

II. Is the BP Oil Spill a “normal accident”?

The BP oil spill does not fit perfectly into the Normal Accidents framework.  A number of specific mistakes led to the spill – at least, that’s what the reports are saying for now.  (That is, it’s not “normal” unless cost-cutting and neglecting safety are considered “normal.”  It does feel that way sometimes…)  Upon hearing my initial lamenting at the onset of the spill, my father sent me this New Yorker article by Malcolm Gladwell to provide some “useful perspective.”  (Thanks, Dad.)  It was published in 1996 and is a reflection on a fatal (and comparable) accident that occurred 10 years prior: the Challenger explosion.

The Challenger was NASA’s second space shuttle, and it lifted off successfully 9 times.  On its 10th liftoff, in 1986, it exploded just 73 seconds off the ground, killing all seven crew members.  On the first 9 flights, the rubber O-rings sealed the booster joints and contained the hot combustion gases.  On the 10th, stiffened by the cold, they failed.  Engineers had warned NASA that it was too cold for take-off, but the top men insisted that they stay on schedule.  Thus it was a combination of mechanical failure and human hubris that caused the Challenger explosion.

The BP oil spill is a similar case.  The hubris of man, the need to drill quickly and cheaply, led to cost-cutting and mechanical failure (as the media currently reports), resulting in a massive oil slick that will continue to grow in the months, if not the years, to come.

As I mentioned previously, I am not confident in deep-sea drilling technology.  Granted, I don’t know much about it, and the current inundation of the interwebs with oil spill opinions makes finding reliable information nearly impossible.  Maybe I’m the one being irrational here, but I just cannot see how this technology is not risky in and of itself.  Nor am I confident in the other oil company executives’ claims that their systems are not flawed.  Even if BP’s spill was not a “normal accident,” that does not preclude other rigs from having them.

This is why all the finger-pointing at BP irks me.  They made some serious mistakes and will pay the consequences – I’m not letting them off the hook.  But by having an easy scapegoat, we, the public, can easily ignore the greater issues at hand, such as the inherent risk of disaster in these complex systems, or the fact that we’re drilling a mile deep into the ocean floor for fuel in the first place.  It’s too easy to make this accident out to be a huge mistake made by greedy corporate white men instead of contemplating the fact that it could have happened through the very nature of the system.

In his book Inviting Disaster: Lessons from the Edge of Technology, James Chiles writes:

A lot of us are offering our lives these days to machines and their operators, about which we know very little except that occasionally things go shockingly wrong… Later study shows that machine disasters nearly always require multiple failures and mistakes to reach fruition.  One mishap, a single cause, is hardly ever enough to constitute a disaster.  A disaster occurs through a combination of poor maintenance, bad communication, and shortcuts.  Slowly the strain builds.

We are all human.  We all know what it’s like to procrastinate, to forget to leave a message, to have our minds wander.  In his book, Chiles argues, citing over 50 examples in immense detail, that most disasters are caused by “ordinary mistakes” – and that to live in this modern world, we have to “acknowledge the extraordinary damage that ordinary mistakes can now cause.”  Most of the time, things run smoothly.  But when they don’t, our culture requires us to find someone to blame instead of recognizing that our own lifestyles cause these disasters.  Instead of reconsidering the way we live our lives, we simply dump our frustration off so that we can continue living our lives in comfort.
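
A quick back-of-the-envelope calculation – my own numbers, purely for illustration – shows how Chiles’s “combination” of ordinary slip-ups scales up.  Suppose a disaster requires three independent safeguards (maintenance, communication, no shortcuts) to fail on the same operation, and each slips on just 2% of operations:

    # Illustrative numbers only - not estimates of real drilling risk.
    p_single = 0.02                 # any one safeguard slips on 2% of operations
    p_all_three = p_single ** 3     # all three slip at once: 8 in a million
    n_operations = 10_000           # operations per year across a whole industry

    p_at_least_one = 1 - (1 - p_all_three) ** n_operations
    print(f"chance on any single operation:  {p_all_three:.6f}")    # 0.000008
    print(f"chance of at least one per year: {p_at_least_one:.1%}")  # ~7.7%

Each individual slip is ordinary and forgivable, and any single operation is almost certainly fine; it is the sheer number of operations that turns an eight-in-a-million coincidence into a better-than-seven-percent annual bet.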

It is too easy to ignore the fact that the risk of disaster comes with these technologies, especially ones that harness a form of energy such as nuclear power, rocket fuel, or, here, the potential energy of pressurized crude oil.

III. Prospective: incorporating Normal Accidents Theory into our culture

At the beginning of his New Yorker article, Gladwell outlines the “ritual to disaster”: the careful exposition of what went wrong, the governmental panel, the pointed fingers.  Rereading it a month after I first received it, I can see this ritual unfolding before me.  It rests on the premise that we can learn from our mistakes – that pinpointing the precise events that led to disaster can help us avoid repeating them.  But Gladwell asks: “What if these public post mortems don’t help us avoid future accidents? … [Perhaps they] are as much exercises in self-deception as they are genuine opportunities for reassurance.”

If Chiles and Perrow are right – if risk, and thus potential accident, are built into the nature of complex machinery run by humans – we should not be reassured.  We can certainly learn from our mistakes and try to keep identical disasters from recurring.  But, as Chiles points out, if all our concern is poured into the one part of the system that has failed before, other parts are left to corrode and rust without our notice.

What would it mean for us to “accept” that our technology is flawed, that “normal accidents” will occur?  It would not lessen the impact of disasters.  But if an acceptable discourse could be developed to address inherent risk in machines without striking fear into the masses, if the topic were no longer untouchable or taboo, we could better prepare for “normal accidents.”  For while industries mostly employ specialists these days, in these accidents (or near-accidents), the answer comes instead from large-scale thinking.  Chiles describes it as a game of chess in which “a chess master spends more time thinking about the board from his opponent’s perspective than from his own.”

We have to combine our risk assessment theories – we have to aim for the optimistic High Reliability Organization Framework, trying to turn as many people on the team into “chess masters” as possible, without getting overconfident.  Although “normal accidents” cannot be predicted, the HROF should include training in what a “normal accident” is.  Even the mere knowledge that the machinery may not always act the way it’s supposed to is better than nothing.

But for now, the disaster ritual will continue, just as it did with the Challenger and other disasters.  BP will take the blame and foot the bill.  In several months or years, there will be a public apology and ceremony to remember the 11 rig workers who died.  And the President will announce: We have learned our lesson from the BP spill.  We will not make this mistake again.  Deep-sea drilling is reopened, we are reborn.  “Your loss has meant that we could confidently begin anew,” as Captain Frederick Hauck said of the Challenger in 1988.

There is another fundamental difference between the BP oil spill and these other man-made disasters: its expanse in both space and time.  The Challenger explosion, while a great tragedy, was swift.  There were no long-term effects felt by the general public (excepting the families of the astronauts).  But this spill is far from over.  By ignoring the inherent risks of deep-sea drilling, we are potentially setting ourselves up for another long-term disaster, affecting millions of people, wildlife, and ecosystems.  I don’t think we can afford a repeat.

Chiles, James R.  Inviting Disaster: Lessons from the Edge of Technology.  New York: HarperCollins, 2002.

Gephart, R. (2004). Normal Risk: Technology, Sense Making, and Environmental Disasters. Organization & Environment, 17(1), 20–26. DOI: 10.1177/1086026603262030

Gladwell, Malcolm.  1996.  “Blowup.”  The New Yorker, January 22, p. 36.

Leveson, N., Dulac, N., Marais, K., & Carroll, J. (2009). Moving Beyond Normal Accidents and High Reliability Organizations: A Systems Approach to Safety in Complex Systems. Organization Studies, 30(2–3), 227–249. DOI: 10.1177/0170840608101478

Perrow, Charles.  Normal Accidents: Living with High-Risk Technologies.  Princeton, NJ: Princeton University Press, 1984.

Weick, K. (2004). Normal Accident Theory As Frame, Link, and Provocation. Organization & Environment, 17(1), 27–31. DOI: 10.1177/1086026603262031

Written by Hanner

June 22, 2010 at 11:21 am

Posted in Journal Article, News

19 Responses

  1. What an awesome and insightful post. One of the best that I’ve read on the oil spill so far.
    I completely agree with you that pointing out a scapegoat is not helping the underlying problems at all. In my ideal world, after the BP oil spill the Western world would have paused to rethink its current reliance on oil and the entire logistical operation that has become necessary to fuel this reliance. Only then would I say: if that is the choice we want to make and if these are the risks we want to take, so be it.

    Also interesting in this regard is a recent Nature paper: http://www.nature.com/nature/journal/v464/n7291/full/nature08932.html

    It describes how networks themselves can be robust (the electrical grid, the internet), but that increasing connectedness between these networks can lead to hidden fragilities of the system, which could lead to catastrophic collapse of both networks.

    Lucas

    June 22, 2010 at 2:39 pm

    • WORLD: I have an announcement to make. An official one.

      LUCAS BROUWERS HAS THE ENTIRETY OF SCIENCE INSIDE HIS BRAIN. He can access any paper at any time, and suggest the references that make you say, “D’oh! Wish I had talked to Lucas about this!”

      To be honest, my brain goes all a-scramble when it comes to most modeling. But I will try to make it through all of that. It looks to be a perfect, researched addition to what I’m trying to say here.

      Re: reconsideration of our reliance – It’s a conversation that’s brewing in comment threads and which will certainly be played out in major media outlets. Whether anything will change? I doubt it, but I tend towards pessimism regarding large-scale actions of our species. We may already be in too deep.

      Thanks for your comments. Made me feel warm n fuzzy.

      Hannah

      June 22, 2010 at 6:52 pm

      • Hahaha, thanks for that awesome ego-boosting announcement, but you’re really overestimating my brain capacity! The paper was there in my head somewhere, but I had to google quite a bit before I found it again. “nature 2010 networks fragility” did the trick… Without google as my external associative memory system, I would be nowhere ;).

        On change and reliance: I have some hope that Obama and other world leaders can leverage the outrage over the spill to bring about some much-needed climate reform. I don’t care whether politicians have political, climatological or environmental reasons; getting off the oil addiction is a good thing.

        Lucas

        June 24, 2010 at 2:57 am

  2. Oh, thank goodness someone else is thinking about this along these terms. A lot of folks are talking about this in terms of wholesale prevention (and rightly so), but as someone who went through the levee failures in New Orleans back in 2005 and is now watching this unfold, we hardly think of disaster containment in terms of planning for quick and directed response and recovery. Poor disaster response only compounds and becomes part of the greater disaster itself.

    As I said on my blog today:

    “The way to prevent this from happening again is known. There are folks opposed to and for offshore drilling who say that we can never prevent a recurrence and therefore we should stop drilling or continue to drill, respectively. But, this was no mere accident. An accident happens when you follow all the rules of the road and external, heretofore-unknown circumstances conspire against you. In this case, the driver didn’t have the seatbelt on, the tires were under-inflated, the brakes were non-operational but no one had bothered to check them and the car was driven anyway even after passengers expressed concern and asked for the handover of keys. (Hey, if folks in the industry are going to liken this ongoing disaster to a car accident or plane crash, you can bet I will run miles with the metaphor.) So, this much is absolutely preventable.

    “What about the rest? As commenter Blair, who incidentally is a [former NASA] rocket scientist, said in a comment to a previous post, ‘It costs to do fault tree analysis and establish contingency plans, but the cost of NOT planning is getting too high. I worry that governments are reactive in nature and will never get ahead of the situation. Government CAN require industry to have plans in place before they proceed with potentially risky activity.’ You cannot prevent lightning from striking the collection ship thus halting oil recovery for a while. That is a legitimate accident. But to not anticipate and plan for any critical component of the operation failing due to human oversight or acts of god, even and especially in the recovery phase, shows that neither BP nor the government has learned philosophically much from the initial disaster and it’s going to take a lot more than six months and a drilling moratorium to fix systemic breakdown.”

    Having worked for major oil companies in the past (and not at all inclined to do so ever again), there is some truth to Exxon, Chevron and Shell’s statements that they wouldn’t have drilled that well. BP engineering isn’t ridiculed within the industry for nothing. But, those always make for famous last words.

    Maitri

    June 23, 2010 at 12:07 pm

  3. Great post, and thanks for including these references.

    The thing that bothers me about this whole event is how the industry and gov’t are both saying things like “We’ll investigate what went wrong and make sure it doesn’t happen again.” — that’s all fine and good, we should certainly investigate what went wrong. But, it would be refreshing if someone had the guts to say: “No matter how much technology and R&D we throw at this we can NEVER guarantee it won’t happen again. The chance for catastrophic failure will always be >0%.”
    The big disconnect that many people have pointed out is that the technology for deep-water drilling is cutting edge, but the technology for responding to a deep-water blowout is ineffective, to put it nicely. In addition to doing our best to prevent a future blowout, we should acknowledge that failure is a possibility and invest $$ and effort into response technology.

    Brian Romans

    June 23, 2010 at 10:17 pm

  4. >Engineers had warned NASA that it was too cold for take-off, but the top men insisted that they stay on schedule. Thus it was a combination of mechanical failure and human hubris that caused the Challenger explosion.

    First off I need to say that I am guessing about what I have to say. I am more than willing to listen to facts if any are available.

    My guess is that on every launch at least a few engineers warned that their subsystems were not at acceptable levels of adequacy. Thus engineers’ concerns were ignored as if they were part of a huge game of crying wolf.

    This would take some of the blame for complex system misbehavior away from managers’ hubris. Does this put back blame for their inability to manage an impossibly complex system?

    For all the sound bites, I conclude that risk management also has to include the risk of being timid to the point of paralysis. Sometimes one simply must guess. Sometimes guesses go horribly wrong.

    Complex systems WILL fail. The ability to recover is right up there with doing one’s best not to fail.

    Peter B

    June 26, 2010 at 2:31 pm

  5. [...] Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology. We all know that BP screwed up big time in the Gulf, but are we learning the wrong lessons from the accident by making them the only villain?  Hannah at Culturing Science discusses the disaster in terms of commonly-used theories of risk. [...]

  6. As long as there is money to be made, deep drilling will continue. What we don’t know, and actually can’t know, is what costs such an accident really has for marine ecology and life. All those ‘estimates’ aside, we’ve never stopped any act that damages our environment just because of the damage done.

    As long as profit and shareholders steer, we will continue to exploit a shrinking world, and only when disaster stares us in the face do we stop to discuss it. I’m not impressed by BP, nor am I impressed by any other profit-making organization, all oil companies included.

    We spew out around thirty percent of all natural gas, for example, before we even start to get some use from it, just in ‘natural’ leaks. And that’s methane, folks – a really heat-trapping gas.

    Off Australia and off Russia (Eastern Europe) we go down to try to exploit the frozen methane deposits, ignoring the very real fact of underwater landslides that will free those deposits as ‘harmless bubbles’ for the few observing them.

    We don’t need a new ‘risk analysis’; we need an inventory of what we still have left in the form of natural assets, and then we need to make some real educated guesses about what we can allow ourselves in the form of exploitation. This whole sh* reminds me all too much of ‘The Emperor’s New Clothes,’ where no one would admit to him being naked, if you know that reference.

    There are no really acceptable risks any more. This world is becoming all too small, and our resources are dwindling all too fast.

    So do I expect anything like this to happen?
    Nope, just more BS coming in the form of Copenhagen deals etc. And we’re all in on it, all refusing to see, no one excluded. We’re leaving a depleted planet to our offspring, and we will say “we didn’t see” – and we didn’t. We kept our eyes closed the whole time.

    Yoron

    June 29, 2010 at 1:35 pm

  7. [...] How probable was the Gulf oil spill? Were BP a bunch of cowboys, or could it have happened to any of the big oil companies working in the deep waters of the Gulf of Mexico? This article looks at how accidents in general happen, with an aim to answering this question. If you are responsible for anything it will make valuable reading, as it compares and contrasts a number of famous catastrophes, and picks out some of the common mistakes that led to them. It also puts the question of broader culpability into better focus. This is perhaps the most interesting post on the oil spill debate so far, and I recommend that everyone reads it! From Culturing Science, by Hannah Waters June 22 2010 [...]

  8. interesting write up, fair and even handed in a time when people aren’t feeling particularly fair and even handed toward BP.

    sometimes i wonder how much better/more responsible, companies would be if they weren’t publicly traded.

    would BP have cut those corners if they didn’t have so many anonymous shareholders receiving e-mails of their monthly dividends?

    auto parts

    July 8, 2010 at 3:50 pm

  9. [...] Science has two very well thought out post on Inevitability and Oil (part 1 and part 2). I agree with most of what she is saying, especially this quote from part 2: While [...]

  10. [...] dump our frustration off so that we can continue living our lives in comfort (Waters, 2010).” [...]

  11. Great post Hannah. Most scary to me after any disaster or even minor accident is when I hear people say “it wouldn’t happen here” or “I would never do that”. The identification of a villain pretty much stops people’s interest in looking for future failures. Another good reference for this topic is David Marx’s Whack-a-Mole: The Price We Pay For Expecting Perfection.

  12. [...] Never thought I’d actually get around to a Pt. 2, eh?  Well, I’ve shown you!  Here’s the first part: Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology [...]

  13. [...] Culturing Science: Octopuses doing tricks on the internet and our search for non-human ‘intelligence’ Culturing Science: Why Scientists Should Read Science Fiction Culturing Science: Microbe biogeography: the distribution, dispersal and evolution of the littlest organisms Culturing Science: Marine Snow: dead organisms and poop as manna in the ocean Culturing Science: Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology [...]

  14. [...] Culturing Science: Marine Snow: dead organisms and poop as manna in the ocean Culturing Science: Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology Culturing Science: Developing a scientific worldview: why it’s hard and what we can [...]

  15. His last name is Perrow. Charles Perrow. Not Farrow. Just an edit.

    John Taylor

    December 2, 2010 at 3:37 pm

    • Thanks! Somehow I managed to cite him correctly, but spell his name wrong the ENTIRE post. Appreciate the tip!

      Hannah Waters

      December 2, 2010 at 3:42 pm

  16. [...] Inevitability and Oil: the inherent risk for accidents in complex technology [...]


Comments are closed.
