Archive for June 2010
To some people, a volcanic eruption means “Ahh! Run! Hot Lava!” But to others, it means “SCIENCE!” To those studying hydrothermal vent communities, that is (and a good many geologists).
Hydrothermal vents are cracks in the seafloor, formed where tectonic plates spread apart, which spew out hot, mineral-rich water from the interior of the earth. Thus they are most commonly found on sea ridges, such as the Mid-Atlantic Ridge and the East Pacific Rise, where two or more tectonic plates meet and clash underwater. These vents host exotic communities of organisms. All the hot, mineral-rich water attracts chemosynthetic bacteria, or bacteria that get their energy from chemicals instead of the sun (as there is no sunlight on the seafloor), which in turn attract organisms that graze on the bacteria.
An important organism in many hydrothermal vent communities is the tube worm. It can grow a couple meters long and lives inside a tube that it builds for itself out of chitin, into which it can retreat when predators are around. The weirdest thing about tube worms: they have no mouth or digestive tract! For food, they rely on a symbiotic relationship with chemosynthetic bacteria, drawing nutrients from the bacteria, presumably in return for the nice home.
This is just one of the many strange and diverse organisms found in hydrothermal vent communities. Over 300 new species have been identified since the first vent was discovered in 1977. However, due to their nature, these vents and their communities are ephemeral: just as easily as they are created by the spreading of the earth’s plates, they can be closed off. Once the mineral-rich water is gone, so are the bacteria, causing many of the species inhabiting the vent (including tube worms) to go locally extinct.
These communities present an interesting question to biologists: from where do these communities come? The two dominant hypotheses are: (1) there is a well-mixed pool of larvae that colonize new vents (similar to the “everything is everywhere” hypothesis about microbial distribution I pondered here), and (2) vent communities are created from larvae supplied by local populations through migration. (Side note: Marine dispersal has been a hot topic on Research Blogging this week! I recommend this post by the recently-graduated LabRat (Congratulations!) on vertical distribution of microbes by hitchhiking on plankton, as well as this post on Southern Fried Scientist about how “ghost populations” affect marine migration.)
A group of scientists from Woods Hole had been studying a number of vents along the East Pacific Rise, a huge ridge cutting across the center of the Pacific, when they noticed that one of their communities had disappeared! Lava from an underwater volcanic eruption had paved over the vents and their communities, killing off species in almost the entire area (RIP). But this lava did not clog all the vents: some of the vents (as well as freshly created ones) continued to spew the hot, mineral-rich water. As the scientists had collected data on the local pool of larvae and the pre-eruption community, it presented a perfect opportunity to study marine dispersal by comparing the vent communities in this area before and after the eruption. If there is a general pool of larvae, they expected a similar community before and after the event. A distinct community post-eruption would signal local migration.
In their 2010 PNAS paper, the scientists found that both the larvae found in the vicinity and the species that settled to colonize the vent area differed drastically from those found before (see figure below). The dominant tubeworm species was Tevnia jerichonana, replacing Riftia pachyptila (see figure above for images). Most interesting was the arrival of Ctenopelta porifera, which had never been found at the site before – the nearest known population is 300 km away! These data suggest that at least the initial community arrives through the second hypothesis: supply from local populations, not a “well-mixed, time-invariant larval pool.”
There are some possible reasons for this. The new vents could be spewing water with a different biochemical makeup, supporting a distinct species of bacteria and thus a distinct community of colonizers. The authors hadn’t done a chemical analysis yet (which would have made it a stronger paper, in my opinion), but offered this as a possibility. Additionally, as I mentioned above, this is just the initial community. The authors found T. jerichonana as the dominant tubeworm species, which at other vents they have seen replaced over time by R. pachyptila (the previous dominant species) and later by the mussel Bathymodiolus thermophilus. This raises the possibility that these vents are colonized initially from local populations, and only after they are “broken in” and deemed habitable do other, more robust species move in, ousting the previous colonizers. Where these species come from – a well-mixed pool or local populations – only time will tell.
And that’s what’s so great about ecology: this experiment isn’t over! The scientists are surely continuing to collect data on these vent communities, and over the decades we will be able to follow them through their succession. So keep your eyes open for the next paper from these scientists to hear the rest of the story…
Mullineaux, L., Adams, D., Mills, S., & Beaulieu, S. (2010). Larvae from afar colonize deep-sea hydrothermal vents after a catastrophic eruption. Proceedings of the National Academy of Sciences, 107 (17), 7829-7834. DOI: 10.1073/pnas.0913187107
As I warned recently, RSS feeds have gotten a little wonky with the URL change. So I’ve switched back to the wordpress address, with the .com forwarding here. It was really an issue of pride and I’m over it – it doesn’t matter much in the end.
BUT! It means that you guys on the wordpress RSS feed have missed a couple items, one of which has been in the works for the past month and thus I’m eager to share and hear your reactions.
- The blood n’ sweat post, entitled Inevitability and Oil, Pt. 1: the inherent risk for accidents in complex technology, is my attempt to cast a wider net on the BP oil spill. The media has mainly focused on nitpicking the details, but I think this “presents an opportunity for us to reflect upon what it means to be a society reliant on complex technologies whose failures can cause disaster.” Comment or email me and let me know what you think!
- I also posted a beautiful video of deep sea squids compiled by MBARI
I deeply apologize to the 2 of you on the .com feed who are forced to read these news updates twice.
Speaking of computers… I’m in the market for a new laptop. This baby is old enough that it’s missing 3 keys (with many others failing regularly, including the “enter” key) and has zero battery. Love yours? Hate it? I need advice! Seriously. I cannot even choose what kind of bread to get at the supermarket. Tips to hannah.waters [at] gmail [dot] com
Thanks for reading!
Tweet at me: @CulturingSci
EDIT: And… this post reset the RSS feed. And the world is back in order. Sorry folks.
When I read updates on blogs or the news about the BP oil spill, my expression is generally very serious: furrowed brow, pursed lips which I’m probably chewing in alternation with gnawing a nail. But last week I laughed out loud, a true LOL, a brash guffaw. (“What?!” my labmates inquired.)
I had read this New York Times article recounting the reactions of the executives of other oil companies during the Congressional hearing as they attempted to assert that this sort of accident would never occur at their own companies’ wells.
“We would not have drilled the well the way they did,” said Rex W. Tillerson, chief executive of Exxon Mobil.
“It certainly appears that not all the standards that we would recommend or that we would employ were in place,” said John S. Watson, chairman of Chevron.
“It’s not a well that we would have drilled in that mechanical setup,” said Marvin E. Odum, president of Shell.
The idea that this would never happen at another deep-sea well is preposterous to me. That the risks of drilling a mile into the ocean – to depths that require robots (yet another form of technology) for access, in order to draw back up pressurized matter from mostly unexplored pockets – can be calculated and prepared for seems absolutely ridiculous. And although the execs are using exact and technical language to ensure that they can never be called hypocrites, the message they are trying to send is: BP messed up. We act more responsibly and would never have made such mistakes. We should be allowed to continue drilling in the deep.
Many people seem ready to play the blame game, pin the whole thing on BP and call it a day. I, however, think that this accident presents an opportunity for us to reflect upon what it means to be a society reliant on complex technologies whose failures can cause disaster.
I. A little bit of theory…
When talking about risk theory and safety, two main ideas come up in the scholarship: Normal Accidents Theory (NAT) and the High Reliability Organization Framework (HROF), which you can read about in quite thorough detail in this article from Organization Studies.
The term “normal accidents” was coined by Charles Perrow in his 1984 book Normal Accidents: Living with High Risk Technologies (available on Google Books) to describe accidents that are not caused by a single, definite error – but are rather due to inherent problems in complex systems. The two qualities that lead towards “normal accidents” or “system accidents” are:
- A system complex enough that not all outcomes can be predicted, leading to a potential situation where 2 failures could interact in an unexpected way, hiding the true cause of the problem; and
- The system is “tightly coupled” – meaning that processes happen very quickly, and the parts are entwined so closely that individual parts cannot be separated from one another.
These two qualities combined create a system for which there is “insufficient time and understanding to control incidents and avoid accidents,” as the Organization Studies article states.
Perrow himself formulated this theory after the incident at Three Mile Island, a nuclear reactor outside Harrisburg, Pennsylvania, which underwent a partial core meltdown in 1979. In this near-disaster, two seemingly contradictory “safety devices,” meant to alert the crew to problems in the reactor, went off simultaneously, distracting the staff from the real problem: a stuck steam valve. Luckily, an engineer put the pieces together with less than an hour to spare. This is an example of a “normal accident” – one where the complexity of the reactor, that is, the system’s “normal” existence, nearly caused disaster itself.
In reaction to Normal Accidents Theory, the more optimistic High Reliability Organization Framework was born. Its originators, Todd La Porte and Karlene Roberts, describe an alternate scenario, in which complex systems are able to run incredibly smoothly and without fail for long periods of time. Citing aircraft control operations as an example, they explain that the technology is not the issue, but rather the complexity of the organization. As long as all the people working on the ground are highly trained in both the technical function of the system and safety, complex systems are not doomed to fail.
While both theories are flawed (as the article mentioned above outlines), I find Normal Accidents Theory to be more useful. It seems obvious that if all employees were highly trained in all areas, things would flow smoothly. But, I’m sorry to report, that doesn’t seem to be the case for most systems and industries. Normal Accidents Theory offers a different way of looking at technology and thinking about accidents – a view that acknowledges their inherent danger and counsels a healthy wariness. A useful view in terms of planning, training, and honesty.
II. Is the BP Oil Spill a “normal accident?”
The BP oil spill does not fit perfectly into the Normal Accident framework. There were a number of specific mistakes that were made that led to the spill – at least that’s what the reports are saying for now. (That is, it’s not “normal” unless cost-cutting and neglecting safety are considered “normal.” It does feel that way sometimes…) Upon hearing my initial lamenting at the onset of the spill, my father sent me this New Yorker article by Malcolm Gladwell in order to provide some “useful perspective.” (Thanks, Dad.) It was published in 1996 and is a reflection on a fatal (and comparable) accident that occurred 10 years prior: the Challenger explosion.
The Challenger was NASA’s second space shuttle, and it lifted off successfully 9 times. On its 10th liftoff, in 1986, it exploded just 73 seconds off the ground, killing all seven crew members. The first 9 times, the rubber O-rings contained the hot gas and kept it from being ignited by the rocket flames. The 10th time, they failed. Engineers had warned NASA that it was too cold for take-off, but the top men insisted that they stay on schedule. Thus it was a combination of mechanical failure and human hubris that caused the Challenger explosion.
The BP oil spill is a similar case. The hubris of man, the need to drill quickly and cheaply, led to cost-cutting and mechanical failure (as the media currently reports), resulting in a massive oil slick that will continue to grow in the months, if not the years, to come.
As I mentioned previously, I am not confident in deep-sea drilling technology. Granted, I don’t know much about it, and the current inundation of the interwebs with oil spill opinions makes finding reliable information nearly impossible. Maybe I’m the one being irrational here, but I just cannot see how this technology is not risky in and of itself. Nor am I confident in the other oil company executives’ claims that their systems are not flawed. That BP’s spill was not a “normal accident” does not preclude other rigs from having them.
This is why all the finger-pointing at BP irks me. They made some serious mistakes and will pay the consequences – I’m not letting them off the hook. But by having an easy scapegoat, we, the public, can easily ignore the greater issues at hand, such as the inherent risk of disaster in these complex systems, or the fact that we’re drilling a mile deep into the ocean floor for fuel in the first place. It’s too easy to make this accident out to be a huge mistake made by greedy corporate white men instead of contemplating the fact that this could have happened just through the nature of the system.
In his book Inviting Disaster: Lessons from the Edge of Technology, James Chiles writes:
A lot of us are offering our lives these days to machines and their operators, about which we know very little except that occasionally things go shockingly wrong… Later study shows that machine disasters nearly always require multiple failures and mistakes to reach fruition. One mishap, a single cause, is hardly ever enough to constitute a disaster. A disaster occurs through a combination of poor maintenance, bad communication, and shortcuts. Slowly the strain builds.
We are all human. We all know what it’s like to procrastinate, to forget to leave a message, to have our minds wander. In his book, Chiles argues, citing over 50 examples in immense detail, that most disasters are caused by “ordinary mistakes” – and that to live in this modern world, we have to “acknowledge the extraordinary damage that ordinary mistakes can now cause.” Most of the time, things run smoothly. But when they don’t, our culture requires us to find someone to blame instead of recognizing that our own lifestyles cause these disasters. Instead of reconsidering the way we live our lives, we simply dump our frustration off so that we can continue living our lives in comfort.
It is too easy to ignore the fact that the risk of disaster comes with technology, especially technologies that harness a concentrated form of energy such as nuclear power, rocket fuel, or, here, the potential energy of pressurized crude oil.
III. Prospective: incorporating Normal Accident Theory into our culture
At the beginning of his New Yorker article, Gladwell outlines the “ritual to disaster:” the careful exposition of the problems that went wrong, the governmental panel, the pointed fingers. Rereading it a month after I first received it, I can see this ritual unfolding before me. It occurs on the premise that we can learn from our mistakes – that the pinpointing of the precise events that led to disaster can help us avoid repeating ourselves. But Gladwell asks: “What if these public post mortems don’t help us avoid future accidents? … [Perhaps they] are as much exercises in self-deception as they are genuine opportunities for reassurance.”
If Chiles and Perrow are right – if risk, and thus potential accident, are built into the nature of complex machinery run by humans – we should not be reassured. We can certainly learn from our mistakes and try to keep repeat disasters from occurring. But, as Chiles points out, if all concern is thrown into the one part of the system that has failed before, other parts are left to corrode and rust without our notice.
What would it mean for us to “accept” that our technology is flawed, that “normal accidents” will occur? It would not lessen the impact of disasters. But if an acceptable discourse could be developed to address inherent risk in machines without striking fear into the masses, if the topic were no longer untouchable or taboo, we could better prepare for “normal accidents.” For while industries mostly employ specialists these days, in these accidents (or near-accidents), the answer comes instead from large-scale thinking. Chiles describes it as a game of chess in which “a chess master spends more time thinking about the board from his opponent’s perspective than from his own.”
We have to combine our risk assessment theories – we have to aim for the optimistic High Reliability Organization Framework, trying to turn as many people on the team into “chess masters” as possible, without getting overconfident. Although “normal accidents” cannot be predicted, the HROF should include training in what a “normal accident” is. Even the mere knowledge that the machinery may not always act the way it’s supposed to is better than nothing.
But for now, the disaster ritual will continue, just as it did with the Challenger and other disasters. BP will take the blame and foot the bill. In several months or years, there will be a public apology and ceremony to remember the 11 rig workers who died. And the President will announce: We have learned our lesson from the BP spill. We will not make this mistake again. Deep-sea drilling is reopened, we are reborn. “Your loss has meant that we could confidently begin anew,” as Captain Frederick Hauck said of the Challenger in 1988.
There is another fundamental difference between the BP oil spill and these other man-made disasters: its expanse in both space and time. The Challenger explosion, while a great tragedy, was swift. There were no long-term effects felt by the general public (excepting the families of the astronauts). But this spill is far from over. By ignoring the inherent risks in deep-sea drilling, we are potentially setting ourselves up for another long-term disaster, affecting millions of people, wildlife, and ecosystems. I don’t think we can afford a repeat.
Gephart, R. (2004). Normal Risk: Technology, Sense Making, and Environmental Disasters. Organization & Environment, 17 (1), 20-26. DOI: 10.1177/1086026603262030
Gladwell, Malcolm. (1996). “Blowup.” The New Yorker, Jan 22, 36.
Leveson, N., Dulac, N., Marais, K., & Carroll, J. (2009). Moving Beyond Normal Accidents and High Reliability Organizations: A Systems Approach to Safety in Complex Systems. Organization Studies, 30 (2-3), 227-249. DOI: 10.1177/0170840608101478
Perrow, Charles. (1984). Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press.
Weick, K. (2004). Normal Accident Theory As Frame, Link, and Provocation. Organization & Environment, 17 (1), 27-31. DOI: 10.1177/1086026603262031
I have a couple of internet-related site updates to disperse.
First of all, I have bought my very own domain name. I feel like a real adult now! Culturing Science is still available at its WordPress address, but you can also access it at http://culturingscience.com. (If you’re in a hurry and can’t waste time with the extra keystrokes…?)
That said, RSS feeds may get a little bit funky. I’m 90% sure the wordpress one will still update in a timely manner, but if you want to be safe, you can subscribe to the new feed here.
The biggest news is that I’m now on Twitter! I’ll be using it mainly to share great blog posts and other interesting science tidbits on the web, so if you’re on twitter, let’s be follow buddies/develop true friendship. The name is @CulturingSci
Have a good weekend. USA!
PS: That last bit was a farce, I actually don’t care about soccer. I’m a baseball girl to the core. Go Trenton Thunder! Go Phils!
In any high school biology class¹, we learn that isolation is key to the evolution of species. For example, take Australia, where an array of marsupials such as koalas and kangaroos reproduce like no other animals on the planet. Isolation on a continental island allowed ancestral marsupials to evolve gestation via pouch, a trait which was retained as these animals later evolved into multiple (cuddly) species. In other words: an event that happened in the past resulted in the organisms we see today, or the history of a species influences its current form and life history.
We attribute the distribution of species on this planet, also known as biogeography, to these sorts of historical events. Organisms evolved, and continue to evolve, the way they do due to historical circumstances out of their control, creating the biodiversity of our world. The idea of biogeography is generally attributed to Lamarck, and throughout the late-18th and early-19th centuries (pre-Darwin, mind you), scientists suggested many reasons for the non-uniform distribution of organisms, with Lyell summing up these historical factors as a combination of environment and dispersal through migration, passive (e.g. seeds carried in the wind) or active (e.g. elephants walking across the plain).
However, not all organisms seemed to fit this pattern. Scientists at this time observed that, while polar bears were limited to the arctic and monkeys to warm climes, organisms such as fungi, sponges, algae and lichens were far more ubiquitous. The botanist Kurt Sprengel, in summary of a common thought, wrote that organisms of “lower organization” must have greater ability to disperse, allowing them to colonize more broadly and thrive where “circumstances propitious to their production occur.” (For a full history, see Maureen O’Malley’s commentary in Nature Reviews Microbiology.)
In 1934, the Dutch biologist Lourens Baas-Becking revived this idea, with the thought that the typical explanations of biogeography do not fit with the world of microorganisms. He saw the same species of microbe living in different places on the globe and in variable environments. Thus, he posited that historical factors such as isolation and environment could not be the forces determining microbial distribution, but rather that “everything is everywhere; the environment selects.” The small size and abundance of many microbe species allowed them to be easily dispersed in water, on wind, on the bodies of animals, spreading them all over the planet. Many microbes can also lie dormant for a long time until conditions improve, or until the “environment selects” them. This would, in effect, create what’s been termed a “seed bank” of microbes, where all microbes are in all environments at the same time, lying in wait for environmental conditions to favor their proliferation.
For most of the 20th century, this so-called “Baas-Becking Hypothesis” was widely accepted, but in the past few decades it has been hotly debated. In 2004, Tom Fenchel and Bland Finlay compiled a literature review in BioScience in favor of the hypothesis, arguing that “habitat properties alone are needed to explain the presence of a given microbe, and historical factors are irrelevant.” They reviewed studies showing the ubiquity of microbe species with few habitat requirements (generalists, if you will), as well as microbe species that are environmentally specific but are found in their preferred habitats on many continents. Of note is a 1997 Oikos study that they themselves published, wherein they found 20 living microbe species in a lake sample. Upon altering conditions (such as food source, temperature, acidity, and oxygen levels), they were able to revive an additional 110 species – evidence supporting the idea of a “seed bank” of microbes. The authors do note that this theory may only apply to the most common microbe species, since not all are able to desiccate and revive – but perhaps this ability is what made them so widespread in the first place.
One caveat with this study is that the authors advocate a phenotypic analysis of microbes. While the ability to study DNA was a huge benefit to the field of microbiology, the authors argue it is less useful here due to the wide genetic variability even within a single microbial population, and thus rely on morphology rather than genetic analysis to describe species. A 2006 review that did include genetic analyses found that things aren’t so cut and dried. The authors cite a number of studies showing reproducible genetic differences within microbe species even along a 10-meter transect in a marsh. In two hot springs thousands of kilometers apart, two species of bacteria (Synechococcus and Sulfolobus) showed significant genetic differences despite living in the same environment. This shows that isolation alone can affect genes, and thus ultimately species, “overwhelming any effect of environmental factors.”
Both reviews note that there is not enough data out there to draw strong conclusions; the 2006 study relied on just 10 articles to determine the significance of distance and environment. To me, the differences in these studies come down to how one defines a “species.” Typically, we differentiate species based on an organism’s ability to produce fertile offspring with another – if they can, they are the same species. (There are many caveats to the “species problem” beyond my scope right now. For a really thorough write-up, see this post from the Wild Muse.) However, most microbes reproduce via cell division, and genes can be transferred horizontally across “species” boundaries. So how do we even define a microbial species in the first place? If we’re looking at evolution alone, it would seem that genetic differences, even within microbes commonly described morphologically as the same species, would be meaningful, as these differences put them on the path to becoming novel species.
One major question that the idea of “everything is everywhere” brings up is: how do microbes evolve in the first place? If these organisms are relatively free from the external pressures of isolation and environment, going locally extinct or reviving based on their surrounding conditions, evolution must take an incredibly long time.
I could not find a paper on biogeography and microbial evolution; however, a paper in PLoS ONE published in April 2010 looked at the biogeography and large-scale evolution of phytoplankton in the ocean. In light of the questions I’m asking here, oceanic plankton and microbe communities are very similar: both are small organisms primarily dispersed passively, by ocean currents in the case of plankton. The ocean hosts a wide variety of environments, and plankton are also generally considered to be everywhere at once. While it is not ideal, I will use this planktonic model to look at biogeography and evolution in a more specific system. (Well, as specific as you can get with the ocean…)
Just as the determinants of microbial biogeography haven’t been concluded, the same is true of plankton. In this study, the authors sampled planktonic communities in two very different ocean environments: subtropical/tropical oceans, characterized by similar conditions throughout a wide geographical range, highly stratified ocean layers, and nutrient-poor surface waters, and sub-Arctic waters, characterized by high vertical mixing and high nutrient levels across the water column. They compared 250-ml samples pairwise from each of the oceanic habitats and found that the planktonic communities were “strikingly dissimilar.” However, when they increased their sample size 100-fold to 25 liters, they found that these contrasting ocean environments shared 76% of their total species pool! This effect is surely found in many microbial studies: when comparing diversity between smaller plots, you are more likely to find a difference. But an increase in plot size, even within the same environment, will find more similarities. (Which is a more meaningful measurement is another question… I’d be happy to hear your comments on that one.)
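This sampling effect is easy to see in a toy simulation. Below is a minimal sketch (the 200-species pool and its geometric abundance weights are invented for illustration, not taken from the paper): two “sites” draw individuals from the very same species pool, yet small samples miss most of the shared rare species and look dissimilar, while large samples converge.

```python
import random

random.seed(42)

# Hypothetical shared pool: 200 species with a long-tailed abundance
# distribution (a few common species, many rare ones), typical of plankton.
N_SPECIES = 200
WEIGHTS = [0.9 ** i for i in range(N_SPECIES)]

def sample_species(n_individuals):
    """Set of species detected in a random sample of n_individuals."""
    return set(random.choices(range(N_SPECIES), weights=WEIGHTS, k=n_individuals))

def jaccard(a, b):
    """Fraction of observed species shared between two samples."""
    return len(a & b) / len(a | b)

# Two sites drawing from the SAME underlying pool, at two sampling efforts
# (loosely analogous to the 250-ml vs 25-liter samples in the study):
small = jaccard(sample_species(100), sample_species(100))
large = jaccard(sample_species(10_000), sample_species(10_000))
print(f"similarity at small sample size: {small:.2f}")
print(f"similarity at large sample size: {large:.2f}")
# Larger samples recover far more of the shared rare species, so the two
# "communities" appear much more similar -- a pure sampling effect.
```

The point of the sketch is that apparent dissimilarity between small samples says little on its own: the two sites here share every species by construction, and only the sampling effort changes.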
To look at the evolution of phytoplankton, the authors took core samples from four distinct geographic environments and identified the fossil diatom species within them, from 240 million years ago to the present, generating “community assemblages” of diatoms through time. They then compared these community assemblages with environmental factors: global CO2 concentrations and oceanic upwelling strength. The authors found that, despite “local determinants such as regional current systems, terrestrial nutrient inputs, atmospheric deposition, physical mixing, etc.,” global climate measures largely predicted the diatom community assemblage, with many species recovering after local extinction. That’s right: even after the extinction of a species, when preferable environmental conditions returned, so did the diatom.
This study provides a clue regarding the importance of environmental conditions to the global distribution of abundant, passively dispersed organisms. What is also interesting is that the same diatom species were found again and again over the course of 240 million years. Their ability for high dispersal and recovery of species enables planktonic communities to evolve “slowly and gradually” over time.
But clearly they have evolved: plankton (and microbes) are incredibly diverse clades. The question to look at now is how evolution is driven in such highly dispersed organisms.
And thus, as usual, it is the tiniest organisms that force us to broaden our view of basic tenets of biology. Just as horizontal gene transfer did for traditional natural selection, microbial dispersal now does for the evolution of species.
It does give me a great deal of hope regarding life on this planet: the possibility that there is a cache of microbes waiting around for the perfect conditions, even ones not suitable for us. As my father, Dennis P. Waters (who needs a blog), once put it, “As long as there’s bacteria, there’s hope.”
¹That is, in one where evolution is taught at all…
Cermeño, P., de Vargas, C., Abrantes, F., & Falkowski, P. (2010). Phytoplankton Biogeography and Community Stability in the Ocean. PLoS ONE, 5 (4). DOI: 10.1371/journal.pone.0010037
Fenchel, T., & Finlay, B. (2004). The Ubiquity of Small Species: Patterns of Local and Global Diversity. BioScience, 54 (8). DOI: 10.1641/0006-3568(2004)054[0777:TUOSSP]2.0.CO;2
Martiny, J., Bohannan, B., Brown, J., Colwell, R., Fuhrman, J., Green, J., Horner-Devine, M., Kane, M., Krumins, J., Kuske, C., Morin, P., Naeem, S., Øvreås, L., Reysenbach, A., Smith, V., & Staley, J. (2006). Microbial biogeography: putting microorganisms on the map. Nature Reviews Microbiology, 4 (2), 102-112. DOI: 10.1038/nrmicro1341
O’Malley, M. (2007). The nineteenth century roots of ‘everything is everywhere’. Nature Reviews Microbiology, 5 (8), 647-651. DOI: 10.1038/nrmicro1711
I’m a few weeks behind on this… but head over to NeuroDojo for the latest Carnival of Evolution – brain edition.
Some great posts up: the evolution of the silica skeletons of glass sponges at Deep Sea News; how humans’ evolution to fill a “cognitive niche” encouraged us to become scholars; the comparative evolution of brains and microchips; and NeuroDojo’s own post on the origin of bone in vertebrates. Head over for more!
The 25th Carnival of Evolution will be hosted right here at Culturing Science! Submit posts here, and see you next month.