Archive for the ‘News’ Category
The RSS feed guys over at Sciam took the day off so our links were broken AND wrong. D’oh!
Here’s the correct RSS feed link! Update your feeds, people.
Thanks to all who sent me kind words throughout the day. I’m reenergized and ready to take back what’s mine — i.e. the internet.
I’m pleased to announce that Culturing Science has a new home — the brand spanking new Scientific American blog network! It’s a lovely group with many of my favorite science bloggers and you can find us all in the same place.
What does this mean for CS? Not much. I will post more because I am contractually obligated, but also because I am now a part of this wonderful community and I want to engage with them. I’m going to finish up zombie blogging, continue the history of science series, maybe even finish up some even older projects that I never got around to.
Switch over your RSS feeds here!
I am thrilled and hope you’ll join me at Sciam.
I’m not a physicist and, as such, would appreciate comments/emails alerting me to any errors! And yes, I did feel like I had to write this whole thing up before approaching radiation ecology. Welcome to my brain.
White Sands, New Mexico, 1954. An FBI agent, a police sergeant, and two scientists venture into a sandstorm, goggles pressed to their eyes, with one goal in mind: to find an odd footprint that they suspect is connected to recent area deaths. The scientists exchange knowing glances when they find it. “Something incredible has happened in this desert,” Dr. Graham says.
A minute later, the air swells with alien screams and the blinding sand clears to reveal their source: a giant ant, nearly 8 feet long. The agent and sergeant shoot frantically — “get the antennae! get the other antenna!” Graham yells. With its senses disabled, the ant collapses into machine gun fire. The team looks on in wonder: what is this thing and where did it come from? “It appears to be from the family Formicidae: an ant,” says Graham. “A fantastic mutation probably caused by lingering radiation from the first atomic bomb.”
This is a scene from the 1954 film Them!, one of many horror movies inspired by the first atomic bomb tests in the southwest United States. (And a great one, at that!) This idea seems preposterous now: radiation causing mutations that cause ants to grow to enormous sizes and feast on humans. Or atomic waste dumped into the ocean landing upon a human skull, creating a murderous zombie. Or a woman’s irradiated brain causing her to grow to 50 feet — “incredibly huge, with incredible desires for love and vengeance!”
But is it really so preposterous? Radiation has taken on its own character, not only because of the immediate fears represented in film and books to the present day, but also because it is so hard to describe. It’s simultaneously envisioned as vats of poisonous waste, particles streaming from the sky — fallout — that can burn human skin, and atoms suspended in the air or water that can be incorporated into living tissue, festering there for decades and causing minuscule damage to DNA.
When I spoke to Tim Jannik of the Savannah River National Laboratory for The Scientist, he had to remind me what radiation really is. “What people often forget is that radiation is simply just energy,” he said. “When you get exposed to radiation, your body is absorbing energy.”
Ward Whicker, a radioecologist at Colorado State University whom I also interviewed for The Scientist, pushed it further: not only is radiation a simpler idea than that held in the public mindset, but it is also omnipresent in our lives. “People in general have a hard time understanding that we live in a very radioactive environment naturally,” he said. “Life has evolved in a radiation environment.”
Of course, we mostly think about radiation during times of crisis, whether it’s concern about nuclear weapons or, more recently, the flooding and subsequent breakdown of the Fukushima nuclear power plant. After discussing radiation for several days in terms of its hazard and then having these conversations of its basic nature in our environment, I realized that I really couldn’t remember very much about what radiation actually is! One thing led to another — or, rather, one wikipedia page led to another — and, after a few days of research, I felt I actually understood, on a basic level, how radiation works.
It felt wonderful to have radiation demystified! So, in case some of you are also struggling to think back to high school physics, I thought I’d write up what I learned.
The split nucleus
Radiation is a broad term that has taken on a very specific meaning for those of us who aren’t physicists. Radiation, from the same root as “ray,” describes something that travels in waves — and if you can think back to high school physics, remember that if something is a wave, it is also a particle. So sunlight — a form of radiation — is a wave, but is also composed of particles, photons. The same goes for UV and radio waves: Also particles. What we call “energy,” some amorphous force, also comes in discrete particles. Henceforth, when I mention “radiation” I will be referring to nuclear radiation: the waves/energy/particles that radiate from a nucleus.
As long as we’re on the subject of words that have taken on their own mythology, I may as well add another to the mix: nuclear. For some reason, I have two compartments for the word “nuclear” in my brain. One describes those tightly packed balls in the center of atoms, the nucleus, around which race electrons in various configurations. The other is more conceptual: a word that describes a great power, much like the One Ring, that can be used for good hypothetically, but is also dangerous.
But, of course, the actual meanings of nuclear in each case is one and the same, as nuclear power and all its gifts and danger are the product of activity at the nucleus of atoms — in particular, its breaking apart.
It takes a great deal of energy to compact neutrons and protons together into a tight ball and hold them there. So you can probably imagine that, if the nucleus does manage to break, energy is released. And this energy release, my friends, is nuclear radiation. (Its actual movement is also called radiation, but I’m referring to radiation as the actual energy.) There are different kinds of nuclear radiation — alpha particles, beta particles, gamma rays, neutrons, and others — that differ in what materials they are able to penetrate and the strength of the energy they carry. There are two ways a nucleus can break:
- It is unstable enough to disintegrate on its own.
- It collides with another particle or nucleus with enough force.
A nucleus that is unstable enough to break apart on its own belongs to a radioactive isotope, often an atom that has picked up a stray neutron or two of the many that fly through the air constantly. This heavier nucleus cannot be held together by the same amount of energy, and thus it typically splits into smaller elements, a neutron or few, and energy. Of the roughly 340 naturally-occurring isotopes that we know of, dozens are radioactive in this way, and that count leaves out some that (a) split so quickly that we can’t recognize their existence in the first place or (b) haven’t split yet since the formation of the earth, but they may still.
The second way for a nucleus to split is to be struck with another particle, a neutron for example, with enough force that the energy holding the nucleus together is disrupted and it breaks into smaller parts. This is the reaction that scientists organize in particle colliders to try and identify all the little particles released at the breaking point.
There really isn’t a huge distinction between these two methods: In both cases, a neutron strikes a nucleus, disrupting the energy holding it together, and causing it to break. It’s just a matter of time — either it happens immediately, or the neutron joins the nucleus for a little while and, eventually, causes its demise.
This is the reaction that occurs in nuclear power plants. Many of these plants use Uranium-235, the only naturally occurring isotope that can sustain a nuclear chain reaction. A nuclear chain reaction is when one nuclear reaction leads to one or more — like knocking over the first domino in a row. When Uranium-235 is struck with a neutron, it can release two or three more neutrons during its breakdown. If just one of these manages to strike another atom of Uranium-235, another reaction occurs, ad infinitum.
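The domino picture can even be sketched in a few lines of Python. This is a toy model, not reactor physics — the function and the capture probability are my own inventions, and only the "roughly three neutrons per fission" figure comes from the discussion above. Each fission releases a few neutrons, and each neutron has some chance of splitting another Uranium-235 nucleus; whether the chain fizzles or grows depends on the average number of follow-on fissions.

```python
import random

def simulate_chain(p_fission, generations=20, neutrons_per_fission=3, seed=1):
    """Toy fission chain: each fission releases `neutrons_per_fission`
    neutrons, and each neutron independently triggers another fission
    with probability `p_fission`. Returns fissions per generation."""
    rng = random.Random(seed)
    fissions = 1                      # one nucleus splits to start the chain
    history = [fissions]
    for _ in range(generations):
        neutrons = fissions * neutrons_per_fission
        fissions = sum(1 for _ in range(neutrons) if rng.random() < p_fission)
        history.append(fissions)
        if fissions == 0:             # the chain died out
            break
    return history

# The multiplication factor k = neutrons_per_fission * p_fission decides
# the outcome: with k < 1 the chain fizzles, with k > 1 it grows.
subcritical = simulate_chain(p_fission=0.2)    # k = 0.6
supercritical = simulate_chain(p_fission=0.5)  # k = 1.5
```

In a power plant, control rods soak up neutrons to hold that multiplication factor right around one; pushing it well above one is what a bomb does.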
And the main point: When any of these reactions or related ones occur, a bit of radiation (energy/waves/particles) is released with it, the extra energy that once was part of the nucleus and held it together. This energy is collected in nuclear power plants to generate electricity by heating water, for example. And it’s this energy that can do damage to our cells.
The danger of radiation
Most forms of radiation in our day-to-day lives are relatively benign — visible light, microwaves and radio waves, for example — and can’t do much harm, with their energy simply causing heat, if that.
But some forms of radiation, such as alpha particles or UV rays, are tough little buggers that can interact with and change other molecules that they run into by pulling off electrons. And it’s these resulting molecules — the oft-advertised free radicals — that can damage DNA, causing mutations. With enough DNA damage, the cells commit suicide (apoptosis), and large numbers of cells dying quickly is what makes people exposed to large amounts of radiation become sick.
If you’re in the vicinity of a large amount of radiation — such as when nuclear power plant cooling is disrupted, nuclear fuel is ignited, as at Chernobyl, or a nuclear bomb explodes — a lot of energy is reaching your body, both in the form of heat, which can cause burns, and nuclear radiation.
Cells in your body that divide very rapidly are the first whose loss causes illness, since killing off a group of them quickly affects the total number due to their exponential growth. The fastest dividers are blood-forming bone marrow cells, and their damage can cause anemia due to a drop in red blood cells and a weakened immune system from a drop in white blood cells. Intestinal cells divide quickly, but not as rapidly as bone marrow cells, so they’re the next to be affected, causing symptoms such as nausea and vomiting, dehydration, and digestion trouble. At very high radiation doses, even the cells that don’t divide are affected, in particular nerve cells, causing neurological problems from headache to coma. And these problems combined can kill.
If the radiation manages to damage DNA without killing the cells, this damage could still cause problems that could potentially lead to cancer later in life. Another concern for cancer-causing radiation is when radioactive isotopes accumulate in tissue, decaying and releasing energy within the body. (Read on! I dare you.)
Bioaccumulation of radioactive isotopes
Much of the concern in the aftermath of the Fukushima reactor accident has been about various radioactive isotopes: Iodine-131, Cesium-137, and Strontium-90, to name a few. These are isotopes that are taken up by the body when eaten and are incorporated into tissues because their biochemistry is similar to iodine, potassium, and calcium, respectively.
Non-radioactive iodine is necessary for proper thyroid function, but when Iodine-131 is taken up by the thyroid instead, the isotope is stored in the thyroid, slowly able to release its radioactive energy and particles over time. This long-term exposure in a very small area can lead to thyroid cancer. However, Iodine-131 decays relatively quickly: In just 8 days, a sample of Iodine-131 will shrink to half its original size. In other words: on average, half of the nuclei in the sample will have decayed after 8 days, a statistical property of the sample as a whole rather than an exact measure of any single nucleus (Thanks, Liz!). However, it releases beta radiation, an ionizing form of radiation that creates free radicals easily and quickly, so it’s best to avoid it despite its short lifespan.
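That 8-day half-life translates into a one-line formula: after a time t, the fraction of a sample remaining is (1/2) raised to t divided by the half-life. A quick Python sketch — the function name and the example numbers beyond the 8-day half-life are mine:

```python
def fraction_remaining(elapsed, half_life):
    """Fraction of a radioactive sample left after `elapsed` time,
    via N(t) = N0 * (1/2) ** (t / half_life). Any time unit works,
    as long as both arguments use the same one."""
    return 0.5 ** (elapsed / half_life)

# Iodine-131, half-life ~8 days:
fraction_remaining(8, 8)    # one half-life: 0.5
fraction_remaining(80, 8)   # ten half-lives: ~0.001, i.e. ~0.1% left
```

So after roughly ten half-lives — under three months for Iodine-131 — only about a tenth of a percent of the original sample is still around.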
Cesium-137 and Strontium-90, however, take around 30 years each to halve in size, slowly releasing radiation over that period of time. Cesium-137 imitates potassium and is taken up into muscle tissue, where it can remain for half a year before it is cycled out and replaced by ordinary potassium, giving it a fair bit of time to release radiation. Strontium-90 takes the place of calcium, building up in bone and bone marrow. Unfortunately, this isotope gets stuck there and isn’t cycled out like Cesium-137, and it can cause bone cancers and leukemia.
These latter two — with 30 year half-lives — can accumulate in plants or animals and, when humans ingest them, become incorporated into our bodies. That is the fear behind much of the environmental impact talk: Will Strontium-90 enter the food chain? How far from the reactor will it spread? And how long do we have to wait before the food is safe again?
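The physical decay, at least, can be put on a clock by inverting the half-life formula: the time for a sample to fall to a given fraction of its starting amount is the half-life times log2 of one over that fraction. A rough sketch — the function and the 1% threshold are mine, only the ~30-year half-life comes from the post, and this ignores the much harder ecological questions of uptake and cycling:

```python
import math

def time_to_fraction(fraction, half_life):
    """Time for a radioisotope to decay to `fraction` of its starting
    amount: t = half_life * log2(1 / fraction)."""
    return half_life * math.log2(1.0 / fraction)

# Cesium-137 or Strontium-90, half-life ~30 years:
time_to_fraction(0.5, 30)    # one half-life: 30 years
time_to_fraction(0.01, 30)   # down to 1%: about 200 years
```

Which is part of why these two isotopes, and not the fast-decaying Iodine-131, dominate the long-term contamination worries.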
The answers to these questions are mostly unknown because we simply don’t have enough experience with them. As I will elaborate on in my next post, very little work has been done studying the ecosystems at Chernobyl, giving us little insight into how these isotopes remain in the environment.
Congratulations! You made it through. I hope I was able to successfully explain radiation and its basic effects to you. Please leave any questions, comments and corrections in the comments or send me an email.
Post edited 4/10/11 to clarify explanation of a radionuclide half-life
Throughout this arsenic-life NASA saga, I’ve been trying to pinpoint the fundamental reasons this story got out of hand. Why did NASA feel the need to uber-hype this research? Why the rush to publish research even if it may not have been ready?
I’ve drawn the conclusion that the primary cause is the need to be PURPOSEFUL while performing scientific research. As an example, I’ll take the research I currently work on. I study the aging process in yeast cells, focusing on how the cells’ epigenome changes as a cell gets “older.” We do this research under a federally-funded grant, for which our purpose is to study the aging process to help us better understand cancer and other age-related diseases.
But, to be honest, I don’t really care about cancer. I mean, I am someone who is perhaps a bit too comfortable with my mortality, but even beyond that: I actually just think the idea of different proteins and other factors manipulating which sections of DNA are transcribed and expressed is fascinating. I want to understand this process better – what proteins do what? how is this different in different cell types? how did this system evolve? – and this “aging grant” is really just an excuse for me to do so.
I doubt I’m alone here. I think a lot of scientists are more interested in uncovering the various processes, not for the good of mankind, but simply because we want to understand. (Correct me if I’m wrong, scientists.) I’d be happy to cure cancer along the way if I can, but in terms of my own goals and what is possible during my brief stint in this field, I just want to understand this system a little bit better than when I started.
Science wasn’t always done with a purpose. Think about Charles Darwin. Sure, he was interested in natural history, but he was on the Beagle to provide companionship to the captain. Along the way, he collected a bunch of samples of mockingbirds and finches and other organisms, and it wasn’t till decades later that he put the pieces together and formulated his theory of natural selection. He didn’t collect specimens on his travels for any real purpose, but used the data he collected to draw conclusions later.
Of course, back then science was primarily done by rich men with too much time on their hands. Now science is at the forefront of innovation and progress; we need more people than bored rich men to be studying it and, hell, anyone should get a chance to do so! But with greater knowledge and technology, we need more money. And since I’m not a rich bored man, I don’t have any money.
That’s where the government comes in: grants to fund research. But since it is taxpayers that are funding this research, it should have goals that will benefit those taxpayers. Thus I study aging and cancer. And these grants do keep us on task. If I find a cool mutation that alters the epigenome of my yeastie beasties and it’s not related to the aging process, I will not be following up on that project.
I go back and forth on whether this is a good thing. On the one hand, it keeps us accountable to the government and taxpayers, who give us our funding. But on the other hand, does research for a purpose help us really advance in biology, help us better understand how life works?
One of my bosses, a great scientist, doctor and philosopher king, recently emailed this quote to our lab from Carol Greider, a recent Nobel Prize winner for her work on the discovery of the aging-related enzyme telomerase:
“The quiet beginnings of telomerase research emphasize the importance of basic, curiosity-driven research. At the time that it is conducted, such research has no apparent practical applications. Our understanding of the way the world works is fragmentary and incomplete, which means that progress does not occur in a simple, direct and linear manner. It is important to connect the unconnected, to make leaps and to take risks, and to have fun talking and playing with ideas that might at first seem outlandish.”
This idea burns me to my very core. Purpose-based science assumes a certain knowledge of the systems we’re studying. But, let’s face it: we still have so much to learn. We’re all still flailing toddlers, trying to find a surface to hoist ourselves upon so that we can actually get somewhere. While scientists are often conceived to be smart and have all the answers, we actually don’t have many. The more you know, the more you know that you don’t know anything at all.
But instead of being allowed to play, to follow up on work because it’s exciting, to take risks, we have to make sure we stay within the limits of our funding and, thus, our purpose. Because “playing” or studying something because we think it’s AWESOME doesn’t provide evidence of “progress.”
I could be entirely wrong: maybe the old adage that progress is made in leaps and bounds (as opposed to baby steps, I suppose) is farcical. Maybe I only believe this because my human soul that thrives on chaos is drawn to it.
Either way: the purpose of research is overemphasized. When I read papers, I am interested in knowing how their discovery fits into “practical knowledge” (“There is hardly anything known about X disease, BUT WE FOUND SOMETHING!”), but more than that, I’m interested in how it fits in with the current model of whatever system they are studying. But that rarely gets as much attention in papers.
And this idea of “purpose” is why science in the media is so often overhyped. News articles often take a definitive stance on how the new study has contributed to the public good. Maybe it’s “eating blueberries will preserve your memory” or “sleeping 8 hours will make you attractive.” This makes the science easy to digest, sure, but it also paints an incomplete picture. These studies are just tiny pieces in a puzzle that scientists will continue to work on for decades. It’s pure hubris to believe that non-scientists cannot understand the scientific process – that they cannot understand that it takes incremental steps. But, nonetheless, if your research cannot be easily hyped, no one will hear about it, so you have to serve a purpose.
So it went with NASA’s arsenic-based life. The current model, both in funding and the media, of requiring purpose to justify research forced NASA to claim a greater purpose for its discovery: “an astrobiology finding that will impact the search for evidence of extraterrestrial life.”
To give both NASA and the researchers the benefit of the doubt, let’s just say they found this cool bug and wanted to share the news to get help in studying it, as author Oremland suggested. They submitted the paper to officially get the word out. But then they needed to find a “good reason” to have been studying arsenic microbes, and NASA decided this was a good opportunity to reinvigorate its reputation of performing “useful science,” so it called a press conference. You know where it goes from here.
All that is pure speculation – but it probably isn’t too far from the truth. Maybe I’m being too kind, but I really doubt that the researchers or NASA had any ill-intentions. They simply lost control, and the following shitstorm took off.
We can scoff at them all we like: “an astrobiology finding that will impact the search for evidence of extraterrestrial life, my ass!” But it’s really not so different from my lab publishing a paper with the headline, “KEY FACTOR IN CELL AGING UNCOVERED” when, really, we just discovered a factor, and we don’t even know if it’s key.
The idea of “useful science” also dampens what drew me to science in the first place: SCIENCE IS COOL! Longing to pry up the corners of current knowledge isn’t enough: we can’t just look, but have to reveal a direct outcome. But if we don’t allow ourselves even to look because of various purpose-based limitations, we could be missing out on something FUCKING AWESOME!
I’m just rambling now – and am very interested in hearing your thoughts on this.
- Does purpose-driven science lead to better science or more innovation?
- Are there ways of judging research as worthy (e.g. for funding purposes) without having to provide a direct purpose?
- How should the media change its model for covering stories? Should every study that comes out get attention, or should we wait for more details and provide more review-like coverage?
- Would larger, field-based studies dampen competition? Would this help or hurt scientific progress?
Etc. etc. If you made it this far, thank you, xox, Hannah.
I never wrote what I meant to here – I simply could not keep up with the rest of the blogs regarding arsenic life! But I’m also aware that outside of the sci-twitter bubble, what went down may not be entirely clear, so here’s the beginning of the draft just in case you need to catch up.
Reactions to NASA’s announcement of “arsenic-based life” have been resounding through the science world these past few weeks. (For more thorough reviews, check out Bora Zivkovic’s link dump, Martin Robbins’s coverage, or the National Association of Science Writers.) For those who haven’t been scouring all the blogs, I’ve illustrated a brief review of events.
The saga began with a NASA press release announcing a press conference about a discovery about “an astrobiology finding that will impact the search for evidence of extraterrestrial life.” This led to speculation about aliens – life on Titan? – and then, when it was uncovered that the study was written by scientists studying the arsenic-rich Mono Lake in California, perhaps the researchers discovered a “shadow biosphere” – a form of life unlike ours, based on different molecules and evidence of a novel evolution of life on earth.
The actual paper, published in Science, instead announced that the scientists had found a bacterium that, when forced to in a lab, could utilize arsenic in place of phosphorus in its molecules, including DNA. After the hype and build-up, it was a disappointing announcement. Due to the massive let-down, a lot of writers and scientists (myself included) chose to accept these findings instead of scrutinizing them. It is a bit embarrassing in hindsight, but after so much hype about a scientific discovery that engaged the public, I wanted to find something to stay excited about, if only to support the idea that science isn’t a complete sham.
Soon after, scientists began to carefully look at the researchers’ methods and found that these findings simply weren’t very well-supported. For the details, I recommend Rosie Redfield’s highly-publicized critique as well as Alex Bradley’s on We Beasties. The researchers’ evidence could easily have been contaminated, and they failed to do some fairly simple tests to definitively show the use of arsenic by these microbes. At this point, most media sources threw up their hands in frustration and stopped covering the story. (For more detail, see Carl Zimmer’s Slate piece, “This Paper Should Not Have Been Published.”)
This series of events led to some interesting reflections around the blogs, including thoughts on the peer review process, the scientific process, and the upside of public scientific debate. The authors’ refusal to engage the bloggy criticisms (even though they used the web to get everyone hyped about their discovery) got everyone into an uproar as well, as described in this open-access Nature editorial.
Things have been a bit quiet around here – and I think I finally figured out the problem last night. I moved a couple months ago and there is a high correlation between this life change and my inability to write. So I rearranged my apartment last night – hopefully having a more legit “desk” will help? I’m pathetic, I know. My space is important to me, ok!!
It occurred to me yesterday that while stressing out about my writer’s block, I totally missed my ONE-YEAR BLOGIVERSARY. Culturing Science and I have been together for a whole year! And while we’ve gotten in some arguments over time, mostly we’ve learned from one another.
I really do want to thank you guys for sticking it out with me this year. I went to a lecture last month and, afterwards, realized that all the topics he covered I did not know a year ago, but now understood because of writing this blog. So thank you for putting up with my learning and stumbling.
Which brings up my next question: WHO ARE YOU? Some famous bloggers sometimes do a roll call to get to meet some of their readers who don’t comment. I think it’s a nice idea, although I’m a little nervous that not even my dad will comment. But I really am interested in learning a little about you … so … will you leave me a comment? I’m nice and friendly, we can be internet friends? Yeah?
Recent news you might be interested in:
- Carnival of Evolution #30! Up at This Scientific Life
- The Molecular Biology blog carnival is up, hosted by the dear Labrat
- Carnival of the Blue #43 is being hosted by Alistair Dove’s Deep Type Flow
You may know that I am attending the Science Online conference, being held in the Research Triangle in January, for the first time! Not only that, but I’m on a panel to moderate a discussion about amateur blogging! (Full program here)
“But it’s just a blog!” – Hannah Waters, Psi Wavefunction, Eric Michael Johnson, Jason Goldman, Mike Lisieski and Lucas Brouwers
Many young people are eager to communicate science despite their lack of scientific and/or journalistic credentials. While all science communicators face challenges, this subgroup faces its own set of challenges, including cultivating a following of readers from scratch and high levels of self-doubt, often referred to as “imposter syndrome.” What value does this rapidly-growing group of science communicators bring to the field? How can the science blogging community encourage and mentor young bloggers? How can we hold these individuals accountable to the high standards of science and journalism while simultaneously allowing them to make mistakes as part of the learning process? In addition, established and successful science communicators will be encouraged to share their tips and tricks with their newer colleagues.
And, lastly, I’m a reviewer for the 2010 edition of Open Lab! Open Lab is a yearly collection of the best science blog posts from the previous year, collected into a physical book (!). This year, Jason Goldman is the editor and faces the monstrous task of sorting through the 900 nominated entries! So I’ll be helping out with some of the ecology posts. It’s a great honor and I’m very happy to support this fine publication.
Thanks for slogging through this poor excuse for a blog post. Don’t forget to check in in the comments! (Please, Dad, will you at least check in so I don’t feel like a total loser?)
When I read updates on blogs or the news about the BP oil spill, my expression is generally very serious: furrowed brow, pursed lips which I’m probably chewing in alternation with gnawing a nail. But last week I laughed out loud, a true LOL, a brash guffaw. (“What?!” my labmates inquired.)
I had read this New York Times article recounting the reactions of the executives of other oil companies during the Congressional hearing as they attempted to assert that this sort of accident would never occur at their own companies’ wells.
“We would not have drilled the well the way they did,” said Rex W. Tillerson, chief executive of Exxon Mobil.
“It certainly appears that not all the standards that we would recommend or that we would employ were in place,” said John S. Watson, chairman of Chevron.
“It’s not a well that we would have drilled in that mechanical setup,” said Marvin E. Odum, president of Shell.
The idea that this would never happen at another deep-sea well is preposterous to me. That the risks of drilling a mile into the ocean – to depths that require robots (yet another form of technology) for access, in order to draw back up pressurized matter from mostly unexplored pockets – can be calculated and prepared for seems absolutely ridiculous. And although the execs are using exact and technical language to ensure that they will never be made hypocrites, the message they are trying to send is: BP messed up. We act more responsibly and would never have made such mistakes. We should be allowed to continue drilling in the deep.
Many people seem ready to play the blame game, pin the whole thing on BP and call it a day. I, however, think that this accident presents an opportunity for us to reflect upon what it means to be a society reliant on complex technologies whose failures can cause disaster.
I. A little bit of theory…
When talking about risk theory and safety, two main ideas come up in the scholarship: Normal Accidents Theory (NAT) and the High Reliability Organization Framework (HROF), which you can read about in quite thorough detail in this article from Organizational Studies.
The term “normal accidents” was coined by Charles Perrow in his 1984 book Normal Accidents: Living with High Risk Technologies (available on Google Books) to describe accidents that are not caused by a single, definite error – but are rather due to inherent problems in complex systems. The two qualities that lead towards “normal accidents” or “system accidents” are:
- A system complex enough that not all outcomes can be predicted, leading to a potential situation where 2 failures could interact in an unexpected way, hiding the true cause of the problem; and
- The system is “tightly coupled” – meaning that processes happen very quickly, and the parts are entwined so closely that individual parts cannot be separated from one another.
These two qualities combined create a system for which there is “insufficient time and understanding to control incidents and avoid accidents,” as the Organizational Studies article states.
Perrow himself compiled this theory after the incident at Three Mile Island. Three Mile Island was a nuclear reactor outside of Harrisburg, Pennsylvania which underwent a partial core meltdown in 1979. In this near-disaster, two seemingly contradictory “safety devices,” meant to alert the crew of problems in the reactor, went off simultaneously, distracting the staff from the real problem: a stuck-open relief valve. Luckily, an engineer put the pieces together with less than an hour to spare. This is an example of a “normal accident” — where the complexity of the reactor, that is, the system’s “normal” existence, nearly caused disaster itself.
In reaction to Normal Accidents Theory, the more optimistic High Reliability Organization Framework was born. Its originators, Todd La Porte and Karlene Roberts, describe an alternate scenario, in which complex systems are able to run incredibly smoothly and without fail for long periods of time. Citing air traffic control operations as an example, they explain that the technology is not the issue, but rather the complexity of the organization. As long as all the people working on the ground are highly trained in both the technical function of the system and safety, complex systems are not doomed to fail.
While both theories are flawed (as the article mentioned above outlines), I find Normal Accident Theory to be more useful. It seems obvious that if all employees were highly trained in all areas, things would flow smoothly. But, I’m sorry to report, that doesn’t seem to be the case for most systems and industries. Normal Accident Theory offers a different way of looking at technology and thinking about accidents – a view that recognizes an inherent danger and counsels a healthy wariness. That is a useful view in terms of planning, training, and honesty.
II. Is the BP Oil Spill a “normal accident?”
The BP oil spill does not fit perfectly into the Normal Accident framework. A number of specific mistakes led to the spill – at least, that’s what the reports are saying for now. (That is, it’s not “normal” unless cost-cutting and neglecting safety are considered “normal.” It does feel that way sometimes…) Upon hearing my initial laments at the onset of the spill, my father sent me this New Yorker article by Malcolm Gladwell to provide some “useful perspective.” (Thanks, Dad.) It was published in 1996 and reflects on a fatal (and comparable) accident that occurred 10 years prior: the Challenger explosion.
The Challenger was NASA’s second space shuttle, and it lifted off successfully nine times. But on its 10th liftoff, in 1986, it exploded just 73 seconds off the ground, killing all seven crew members. On the first nine flights, the rubber O-rings sealed the joints of the rocket boosters, keeping hot gas from escaping and igniting. On the 10th, they failed. Engineers had warned NASA that it was too cold for take-off, but managers insisted on staying on schedule. Thus it was a combination of mechanical failure and human hubris that caused the Challenger explosion.
The BP oil spill is a similar case. The hubris of man, the need to drill quickly and cheaply, led to cost-cutting and mechanical failure (as the media currently reports), resulting in a massive oil slick that will continue to grow in the months, if not the years, to come.
As I mentioned previously, I am not confident in deep-sea drilling technology. Granted, I don’t know much about it, and the current inundation of the interwebs in oil spill opinions makes finding reliable information nearly impossible. Maybe I’m the one being irrational here, but I just cannot see how this technology is not risky in and of itself. Nor am I reassured by the other oil company executives claiming that their systems are not flawed. While BP’s spill was not a “normal accident,” that does not preclude other rigs from having them.
This is why all the finger-pointing at BP irks me. They made some serious mistakes and will pay the consequences – I’m not letting them off the hook. But by having an easy scapegoat, we, the public, can easily ignore the greater issues at hand, such as the inherent risk for disaster in these complex systems, or the fact that we’re drilling a mile deep into the ocean floor for fuel in the first place. It’s too easy to make this accident out to be a huge mistake made by greedy corporate white men instead of contemplating the fact that this could have happened just through the nature of the system.
In his book Inviting Disaster: Lessons from the Edge of Technology, James Chiles writes:
A lot of us are offering our lives these days to machines and their operators, about which we know very little except that occasionally things go shockingly wrong… Later study shows that machine disasters nearly always require multiple failures and mistakes to reach fruition. One mishap, a single cause, is hardly ever enough to constitute a disaster. A disaster occurs through a combination of poor maintenance, bad communication, and shortcuts. Slowly the strain builds.
We are all human. We all know what it’s like to procrastinate, to forget to leave a message, to have our minds wander. In his book, Chiles argues, citing over 50 examples in immense detail, that most disasters are caused by “ordinary mistakes” – and that to live in this modern world, we have to “acknowledge the extraordinary damage that ordinary mistakes can now cause.” Most of the time, things run smoothly. But when they don’t, our culture requires us to find someone to blame instead of recognizing that our own lifestyles cause these disasters. Instead of reconsidering the way we live, we simply dump our frustration elsewhere so that we can carry on in comfort.
It is too easy to ignore the fact that the risk of disaster comes with technology, especially technologies that concentrate energy, whether nuclear fuel, rocket propellant, or, here, pressurized crude oil.
III. Prospective: incorporating Normal Accident Theory into our culture
At the beginning of his New Yorker article, Gladwell outlines the “ritual to disaster”: the careful exposition of the problems that went wrong, the governmental panel, the pointed fingers. Rereading it a month after I first received it, I can see this ritual unfolding before me. It rests on the premise that we can learn from our mistakes – that pinpointing the precise events that led to disaster can help us avoid repeating them. But Gladwell asks: “What if these public post mortems don’t help us avoid future accidents? … [Perhaps they] are as much exercises in self-deception as they are genuine opportunities for reassurance.”
If Chiles and Perrow are right – if risk and thus potential accident are built into the nature of complex machinery run by humans – we should not be reassured. We can certainly learn from our mistakes and try to keep repeat disasters from occurring. But, as Chiles points out, if all concern is thrown into the one part of the system that has been harmed before, it will only leave other parts to corrode and rust without our notice.
What would it mean for us to “accept” that our technology is flawed, that “normal accidents” will occur? It would not lessen the impact of disasters. But if an acceptable discourse could be developed to address inherent risk in machines without striking fear into the masses, if the topic were no longer untouchable or taboo, we could better prepare for “normal accidents.” For while industries mostly employ specialists these days, in these accidents (or near-accidents), the answer comes instead from large-scale thinking. Chiles describes it as a game of chess in which “a chess master spends more time thinking about the board from his opponent’s perspective than from his own.”
We have to combine our risk assessment theories – we have to aim for the optimistic High Reliability Optimization Framework, trying to turn as many people on the team into “chess masters” as possible, without getting overconfident. Although “normal accidents” cannot be predicted, the HROF should include training in what a “normal accident” is. Even the mere knowledge that the machinery may not always act the way it’s supposed to is better than nothing.
But for now, the disaster ritual will continue, just as it did with the Challenger and other disasters. BP will take the blame and foot the bill. In several months or years, there will be a public apology and ceremony to remember the 11 rig workers who died. And the President will announce: We have learned our lesson from the BP spill. We will not make this mistake again. Deep-sea drilling is reopened, we are reborn. “Your loss has meant that we could confidently begin anew,” as Captain Frederick Hauck said of the Challenger in 1988.
There is another fundamental difference between the BP oil spill and these other man-made disasters: its expanse in both space and time. The Challenger explosion, while a great tragedy, was swift. There were no long-term effects felt by the general public (excepting the families of the astronauts). But this spill is far from over. By ignoring the inherent risks in deep-sea drilling, we are potentially setting ourselves up for another long-term disaster, affecting millions of people, wildlife, and ecosystems. I don’t think we can afford a repeat.
Gephart, R. (2004). Normal Risk: Technology, Sense Making, and Environmental Disasters. Organization & Environment, 17(1), 20-26. DOI: 10.1177/1086026603262030
Gladwell, Malcolm. (1996). “Blowup.” The New Yorker, Jan 22, 36.
Leveson, N., Dulac, N., Marais, K., & Carroll, J. (2009). Moving Beyond Normal Accidents and High Reliability Organizations: A Systems Approach to Safety in Complex Systems. Organization Studies, 30(2-3), 227-249. DOI: 10.1177/0170840608101478
Perrow, Charles. (1984). Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Weick, K. (2004). Normal Accident Theory as Frame, Link, and Provocation. Organization & Environment, 17(1), 27-31. DOI: 10.1177/1086026603262031