Archive for the ‘Reflection’ Category
In which Hannah treats her blog like her livejournal circa 2003. Livejournalism — reflections on being a blogger-turned-journalist.
Two-and-a-half weeks ago, I started interning at The Scientist magazine. One day I was a scientist; the next, I had decided to leave it all to try my hand at writing about science for a living. I really didn’t know what to expect, but as the start of the internship crept toward me, I started to become afraid.
My only writing has been on the web, with unlimited space and freedom of form. I knew that, writing professionally, I would no longer have the freedom to wax philosophical and provide pages of background, as I am wont to do. Would I be forced to play into hype? Bend or even break my principles? Would I leave completely jaded, turned off from science writing altogether?
None of my fears have been confirmed thus far; in fact, the past weeks have been both amazing fun and full of learning. In each story I’ve covered, I have been able to see the hype point but have always pushed it far, far away from me. (And I even laugh a little when I see the story distorted elsewhere — that is, before I start crying for humanity.)
I’ve spent the last couple of days researching a news story, which went up tonight, about a PLoS ONE paper on human contamination in databases of genomic sequences. I was absolutely fascinated by the topic. The second I spotted the paper, I contacted the scientist, Rachel O’Neill, to hear from her the story behind her research.
Let me tell you — once you start talking to scientists about their work, you can never go back. There’s really nothing better than to hear what they care about, why they did the work, and, of course, to get the details of the methods explained without having to read Wikipedia articles on various genetics tools just to get a sense of it. It is the most fun part of the job (and also the part I was most afraid of).
After I talked to Rachel, I needed to get a second and even third source to read the paper and let me know what they thought. A scientist can’t help but give a biased account of her own work, after all. I tried to contact a number of people who work in large-scale genomic sequencing, but no one was willing to give me an interview. Maybe I picked people who were too big-name and above puny me; or maybe the story seemed a bit too controversial and researchers didn’t want their names tied to it. Either way, I was struggling, and I needed help.
“I have tons of scientist friends on the internet!” popped into my head. It seemed strange, like I was crossing an illegal boundary between my life as a “real journalist” and a blogger. This effect is particularly strong for me considering that I’m a huge idealist and can’t escape my view that the world is a meritocracy. I need to make it as a journalist on my own, without my bloggy friends! Right?
Nope. I needed help so I contacted Jonathan Eisen of the Tree of Life, a big-shot genomicist, open access advocate, and evolutionary biologist. (Did I mention that I might be his #1 fan in a creepy internet way? Hi, Jonathan! What’s up?) I learned a ton from him during our talk and it was such a joy to speak to him as he biked across campus (in the wind – what a champ!). I felt a little strange about it, again like I was breaking some rule because he, like, retweeted me one time or something. Was I being biased by talking to him? was the thought running through my mind.
When reading the news later, it struck me that Monday’s news on a lateral gene transfer between humans and gonorrhea could just be an instance of contamination – an idea that Eisen confirmed. I frantically emailed my boss. (Literally: “This could be REAL SCIENCE JOURNALISM, gritty like in the movies!” To which she responded, “I’m confused.” A little over-excitable, perhaps?) I emailed Rachel O’Neill about it, but didn’t know how to proceed further.
So, with a sigh, I settled down to read my Research Blogging RSS feeds in the morning. And what did I come across? A blog post with the title: “Human DNA in bacterial genomes? Yes? No? Maybe?” Could this be a dream come true? I went to the blog and, sure enough, it was written by a genomicist, Mark Pallen. I emailed him immediately, and later called him up to discuss whether the data in the gonorrhea paper definitively proved that there was no contamination in the database.
I wrote up the story, had a blast doing so, and now it’s on the front page of the website.
For all the fighting I’ve done with myself to avoid being biased in choosing my sources, maybe Jonathan and Mark deserve to be chosen. They are active communicators, expressing their desire and ability to explain science on the web. Is that really a bias? Isn’t that really just… a logical choice?
In which case… shouldn’t the science blogging community, and Research Blogs in particular, be a goldmine for journalists? It’s basically a list of scientists (and others… but many scientists) who want to talk about what they’re doing, make a point to keep in touch with what’s happening in their field (and others), and whose work you can evaluate before even speaking with them. Is this biased reporting? I don’t think so – unless they’re your friends, that is.
The second I posted my story, I looked at twitter and saw that Ed Yong had posted a similar story, gonorrhea and all. But it wasn’t the same story – and that’s because, in the end, blogs do have an edge over more mainstream journalism. I mentioned word count on twitter, but really it’s the story format.
If you’re trying to directly communicate basic information about a new science article, you’ve gotta do it at the beginning — or at least that’s the current practice. And I do understand the reasoning behind it: if people don’t have time to read the whole article, at least the kernel is provided up front so they can leave having learned something. It’s also a good way to get the news out fast; the beginning is the hardest part to write, and by having a formula, most of the thinking is already done: summary of research, implications, outside-source quote. Then you can zoom out and back up.
But because of that, I can’t write the science like a story, or not with the same flow. I can’t start out with an anecdote about how the scientists started studying the topic, a funny quote, a philosophical anecdote, or anything else. Scientific research IS a story – as I’ve written elsewhere – a story of how the research progressed, the interests of a person who happens to be a scientist. The best science writing plants curiosity and leads the reader to ask the same questions the scientists did.
But the news model persists because it’s modelled on other fields, and because it sets an easy demarcation of what is worth covering: “It’s new, so we’d better write about it.”
I do believe that journalism and the media are changing for the better, and that a lot of it has to do with blogs and writing online. Freedom of space and form leads to more interesting stories, better stories. Right now science journalism tries to frame science to make it a story — let’s reveal this bias, or let’s talk about how xyz new finding has changed the world.
But science doesn’t need framing; it already is a story.
Even if right now I’m constrained by format, I’m not shaken – I still want to continue in science journalism more than anything. Not only for the joy of it – but if you want to make change, you have to play the game for a little, as I learned from SLC Punk. (Has this film guided anyone else’s worldview? Please say I’m not alone!) Maybe one day I’ll edit a magazine and I’ll throw away the formula and get to test whether it’s successful. But I need to get there first.
…in my dreams right? Didn’t I say I’m an idealist?
Anyway, enough for tonight. Here are the articles I’ve written for The Scientist so far if you want to check them out.
- 3rd Feb 2011 – New mosquito identified: on a group of mosquitoes that hasn’t been targeted – or even identified – in the fight against malaria in sub-Saharan Africa
- 10th Feb 2011 – Cellular chaos fights infection: why disrupting RNA degradation could create antibiotics
- 14th Feb 2011 – The mouse is not enough: early development differs between mice, the standard model organism, and cows, further encouraging scientists to study multiple model organisms
- 16th Feb 2011 – Contaminated genomes: human sequences have been found in over 20% of sequenced non-primate genomes in databases
(I know – I still can’t write a title for my life. Sue me!)
For most people, the new year is a time to look forward and think about how they will change their behavior in the coming year. I certainly have goals for my own future – but I tend far more towards reflection than anything else. So here, instead, I’ll present something that I’ve learned in the past year.
It all started with a blog post, obviously. In March, I wrote about invasive salamanders and, with a traditional view of invasive species replacing the human-defined idea of “nature,” I ended the post, rather naively, with the line, “How can we save our planet?” I got in a comment argument with Matt Chew about conservation, nativeness, and evolution, and while I still believe much of what I argued, the line of thinking in that thread (including my own obtuseness) set off a chain reaction that altered the way I viewed the world.
From then on, much of the focus of my mind has centered on the place of humans in the natural world. I started shifting my view from a standard anthropocentric view to one in which we are just animals, arguing with myself and friends about whether we really are special, if our consciousness really elevates us above nature, and the factors in society that make us believe we are the pinnacle of evolution.
For a concrete example, I have since had many conversations with my philosopher queen bestie, Erinrose, about altruism and selfishness. I argued that we are all inherently selfish – that we feel warm and fuzzy when we help someone else because a gene that makes us feel that way has been preserved. And it would only be preserved if it helped individuals survive, passing the gene along the line.
But despite these arguments I make to myself, there is one kicker that always fails my logic: my little brother, Jonah.
Jonah has Fragile X syndrome, a genetic mutation that can cause autism. Jonah himself is not autistic; my mom’s analogy is that he is the opposite of autistic. While a common feature of autism is an inability to detect emotion or differentiate between faces, Jonah has the opposite problem. The world around him is so overwhelming that he cannot help but cower in the presence of most types of stimulation. When he is excited about something, such as when we were riding the trolley around Memphis yesterday, he makes a lot of non-lingual noise as if to block out some of the stimulation and excitement from his joy of the ride. The overstimulation spreads to his limbs, flapping and flopping as if he cannot hold all the energy inside of him but has to do something with it. Let’s just say he draws a lot of attention to himself because he is so overwhelmed with emotion.
The night before the trip, my family sat down to watch some old home movies that my dad had transferred to DVD. (Most of the memories had, unfortunately, been taped over by my middle school-aged brother and his friends recording their slumber parties. THANKS A TON, JACOB!) Part of the film we saw was of three-year-old Jonah working with his speech therapist. He could not pronounce vowels at this point, but, working as hard as he could, would spit out the first consonant of a word, causing the therapist to erupt in applause. This was after his teachers at school told us that he would never talk. Now you can’t shut him up.
When my mom found out about his diagnosis, she wept because, as a book-lover with a PhD in English literature, she could not bear the thought that he would never find joy in reading. But on our trip, he could not stop reading. We went on a hike in old growth forest in Mississippi. While he usually is strictly the leader on hikes, he was lagging hundreds of feet behind us, nose buried in a book, unable to keep up because he was so absorbed.
Here he is, evolutionarily some useless runt who cannot take care of himself, who takes up far more resources than a normal person, for whom we were told life would be non-verbal and institutionalized. In nature red in tooth and claw, he would be dead, with the three of us other siblings competing successfully for his resources, and he would not be worth the parental investment of my darling progenitors. But look at him now! I think he’s even smarter than we suspect. He makes friends wherever he goes. Sure, he still can’t tell a joke for his life, but our human society has allowed him to survive and grow despite his relative incompetence.
I care about him more than anyone or anything in the world. If you know me, you know this to be pure fact. When I was fuming in the backseat of my parents’ van yesterday because the airline had lost my luggage, I watched him engaging my family about the trip and I started crying just to behold him. I would do absolutely anything for him. The worst thing you could do to me is to take him from me. (Oh lordy, I’m crying again.)
This fact makes zero sense biologically. Sure, he has half of my genes so I have an investment in his survival. But, let’s face it – he’s unlikely to ever pass those genes on. Nonetheless, I would give up 100% of my own genetic heritage to allow his 50% to dead end.
Surely this feeling – this “love” or whatever – has evolved to cause me to protect those close to me who, in turn, help me survive, whether they be family, friends, or potential caretakers of my potential children. Maybe I feel so strongly because, while he is my brother, I also feel like he is my son in many ways. I’m eight years older than him, but emotionally and developmentally the gap is much wider. So maybe I have double the chemical reactions going off when I look at him, part fraternal and part maternal. Or maybe it’s just a mistake in the biological machinery that my knowledge and logic cannot penetrate.
Whatever the reason, he is the stymie in my thinking. He is the puzzle piece that doesn’t fit into my newly-acquired worldview of people as inherently selfish machines. We probably all have such a piece in our lives, something that escapes purely biological explanation or logic.
So this new year, think back on the people in your life that you care for, defying biological explanation. At this point in my thinking, this is what makes us human.
And when Jonah inevitably forces the entire party to raise a toast to the new year as they gaze upon him with pure love, he’ll be toasting a bit to you.
Happy New Year.
PS: Don’t cry, Mom!!
UPDATE: She cried.
Throughout this arsenic-life NASA saga, I’ve been trying to pinpoint the fundamental reasons why this story got out of hand. Why did NASA feel the need to uber-hype this research? Why the rush to publish research even if it may not have been ready?
I’ve drawn the conclusion that the primary cause is the need to be PURPOSEFUL while performing scientific research. As an example, I’ll take the research I currently work on. I study the aging process in yeast cells, focusing on how the cells’ epigenome changes as a cell gets “older.” We do this research under a federally-funded grant, for which our purpose is to study the aging process to help us better understand cancer and other age-related diseases.
But, to be honest, I don’t really care about cancer. I mean, I am someone who is perhaps a bit too comfortable with my mortality, but even beyond that: I actually just think the idea of different proteins and other factors manipulating what sections of DNA are transcribed and expressed is fascinating. I want to understand this process better – what proteins do what? how is this different in different cell types? how did this system evolve? – and this “aging grant” is really just an excuse for me to do so.
I doubt I’m alone here. I think a lot of scientists are more interested in uncovering the various processes, not for the good of mankind, but simply because we want to understand. (Correct me if I’m wrong, scientists.) I’d be happy to cure cancer along the way if I can, but in terms of my own goals and what is possible during my brief stint in this field, I just want to understand this system a little bit better than when I started.
Science wasn’t always done with a purpose. Think about Charles Darwin. Sure, he was interested in natural history, but he was on the Beagle to provide friendship to the captain. Along the way, he collected a bunch of samples of mockingbirds and finches and other organisms, and it wasn’t until decades later that he put the pieces together and formulated his theory of natural selection. He didn’t collect specimens on his travels for any real purpose, but used the data he collected to draw conclusions later.
Of course, back then science was primarily done by rich men with too much time on their hands. Now science is at the forefront of innovation and progress; we need more people than bored rich men to be studying it and, hell, anyone should get a chance to do so! But with greater knowledge and technology, we need more money. And since I’m not a rich bored man, I don’t have any money.
That’s where the government comes in: grants to fund research. But since it is taxpayers that are funding this research, it should have goals that will benefit those taxpayers. Thus I study aging and cancer. And these grants do keep us on task. If I find a cool mutation that alters the epigenome of my yeastie beasties and it’s not related to the aging process, I will not be following up on that project.
I go back and forth on whether this is a good thing. On the one hand, it keeps us accountable to the government and taxpayers, who give us our funding. But on the other hand, does research for a purpose help us really advance in biology, help us better understand how life works?
One of my bosses, a great scientist, doctor and philosopher king, recently emailed this quote to our lab from Carol Greider, a recent Nobel Prize winner for her work on the discovery of the aging-related enzyme telomerase:
“The quiet beginnings of telomerase research emphasize the importance of basic, curiosity-driven research. At the time that it is conducted, such research has no apparent practical applications. Our understanding of the way the world works is fragmentary and incomplete, which means that progress does not occur in a simple, direct and linear manner. It is important to connect the unconnected, to make leaps and to take risks, and to have fun talking and playing with ideas that might at first seem outlandish.”
This idea burns me to my very core. Purpose-based science assumes a certain knowledge of the systems we’re studying. But, let’s face it: we still have so much to learn. We’re all still flailing toddlers, trying to find a surface to hoist ourselves upon so that we can actually get somewhere. While scientists are often perceived as smart and as having all the answers, we actually don’t have many. The more you know, the more you know that you don’t know anything at all.
But instead of being allowed to play, to follow up on work because it’s exciting, to take risks, we have to make sure we stay within the limits of our funding and, thus, our purpose. Because “playing” or studying something because we think it’s AWESOME doesn’t provide evidence of “progress.”
I could be entirely wrong: maybe the old adage that progress is made in leaps and bounds (as opposed to baby steps, I suppose) is farcical. Maybe I only believe this because my human soul that thrives on chaos is drawn to it.
Either way: the purpose of research is overemphasized. When I read papers, I am interested in knowing how their discovery fits into “practical knowledge” (“There is hardly anything known about X disease, BUT WE FOUND SOMETHING!”), but more than that, I’m interested in how it fits in with the current model of whatever system they are studying. But that rarely gets as much attention in papers.
And this idea of “purpose” is why science in the media is so often overhyped. News articles often take a definitive stance on how the new study has contributed to the public good. Maybe it’s “eating blueberries will preserve your memory” or “sleeping 8 hours will make you attractive.” This makes the science easy to digest, sure, but it also paints an incomplete picture. These studies are just tiny pieces in a puzzle that scientists will continue to work on for decades. It’s pure hubris to believe that non-scientists cannot understand the scientific process – that they cannot understand that it takes incremental steps. But, nonetheless, if your research cannot be easily hyped, no one will hear about it, so you have to serve a purpose.
So it was with NASA’s arsenic-based life. The current model, both in funding and the media, of requiring purpose to justify research forced NASA to claim a greater purpose for its discovery: “an astrobiology finding that will impact the search for evidence of extraterrestrial life.”
To give both NASA and the researchers the benefit of the doubt, let’s just say they found this cool bug and wanted to share the news to get help in studying it, as author Oremland suggested. They submitted the paper to officially get the word out. But then they needed to find a “good reason” to have been studying arsenic microbes, and NASA decided this was a good opportunity to reinvigorate its reputation for performing “useful science,” so it called a press conference. You know where it goes from here.
All that is pure speculation – but it probably isn’t too far from the truth. Maybe I’m being too kind, but I really doubt that the researchers or NASA had any ill-intentions. They simply lost control, and the following shitstorm took off.
We can scoff at them all we like: “an astrobiology finding that will impact the search for evidence of extraterrestrial life, my ass!” But it’s really not so different from my lab publishing a paper with the headline, “KEY FACTOR IN CELL AGING UNCOVERED” when, really, we just discovered a factor, and we don’t even know if it’s key.
The idea of “useful science” also dampens what I love most about science: SCIENCE IS COOL! Longing to pry up the corners of current knowledge isn’t enough: we can’t just look, but have to reveal a direct outcome. But if we don’t allow ourselves even to look because of various purpose-based limitations, we could be missing out on something FUCKING AWESOME!
I’m just rambling now – and am very interested in hearing your thoughts on this.
- Does purpose-driven science lead to better science or more innovation?
- Are there ways of judging research as worthy (e.g. for funding purposes) without having to provide a direct purpose?
- How should the media change its model for covering stories? Should every study that comes out get attention, or should we wait for more details and provide more review-like coverage?
- Would larger, field-based studies dampen competition? Would this help or hurt scientific progress?
Etc. etc. If you made it this far, thank you, xox, Hannah.
Many of you may be familiar with Ed Yong’s post on the Origin of Science Writers, in which he invited writers to post their stories, their travels and travails to get to their current status. (There are over 100 comments at this point.) As I read through the contributions, I realized that something was missing: young or new science writers (with one or two exceptions).
Although encouraged by a seasoned blogger to contribute myself, I felt uncomfortable with the idea. After all, almost all the other writers have higher degrees, have been writing for many years, have published books, etc. Who am I to add myself to the list? I, a mere 23-year-old with her bachelor’s degree, a science writer by self-definition more than anything else – do I dare to add myself to their ranks?
The science writing world is changing, and not just because of the ScienceBlogs exodus (Bora’s must-read farewell here). We no longer need credentials to write about science: I can just sign up for a WordPress page and do it! I can risk irrevocable embarrassment and failure on the internet, dooming my dreams of becoming a “real” science writer!
Joking aside, I can see why some people would be hesitant about the emergence of younger, less-experienced science writers on the scene. I don’t know everything about science. I haven’t received my grad school drilling in identifying faulty methods. I haven’t been trained in journalism or ethics. Sure, I care a lot about science and education – but does that alone make me qualified to spout off on various topics that I’ve only learned about in the past week?
The potential problem with inexperienced writers is a greater likelihood of making mistakes. I admittedly use this blog as a learning tool for myself. It’s an incentive to read and do research, and then regurgitate it in a fluid way so that I can get a sense of how the research fits together and, in the process, make it useful to other people. While some of my recently graduated friends comment on how their learning has dropped in this year since college, I would say that I’ve actually learned more, in great part due to this blog.
My awareness of my relative inexperience and thus potential for spreading misinformation makes me work really hard to not blather on about things I don’t know anything about. This is one of the reasons I can’t write a blog post every day (or week): for every post I write, I first fact-check, read review articles, and generally make sure I know what I’m talking about. My lack of expertise forces me to do my research well (resulting in mini-epic blog posts). This also helps me toward my goal of creating posts that provide a lot of background, so that I’m providing more than just a small piece of the puzzle when I write about a topic.
But mistakes happen to everyone – not just inexperienced writers – and the internet community should respond to error in a constructive way. Several times I have been torn up in the comments by other scientists (sometimes with unfounded anger) in a way that doesn’t help correct an error but simply makes me feel like an idiot and doubt myself. That does no one any good: it doesn’t provide a correction, it makes me want to disappear, and it only serves to make the commenter feel good about her/himself. (As if showing intellectual dominance through mockery should make anyone feel good… bullies.) Mistakes should be corrected through polite questioning and suggestion, increasing information quality without discouraging the writer.
But these potential mistakes don’t mean that we young bloggers don’t belong. We are kids who have normal jobs. We don’t have time scheduled into our workday to read papers, but do it when we get home instead of going out drinking. Our worldviews are not yet jaded by academia. And I think this shines through in our writing – excitement, a certain humbleness, an ability to admit that we don’t know everything.
Well, now I’ve blogged about blogging. If that doesn’t make me a science writer, I don’t know what does.
And with that: several weeks ago Bora (aka the blogfather) of A Blog Around the Clock tagged me in the Blogging with Substance meme, and I’d like to dedicate mine to a few young bloggers that are doing really great work.
1. Sum up your blogging motivation, philosophy and experience in exactly 10 words.
That’s a hard one – 10 words is incredibly restrictive. I guess I’ll write a haiku!
Teaching and learning;
Never limit oneself;
Share always the cool
2. Pass it on to 10 other bloggers with substance
I’m going to tag other young bloggers with substance – just because we don’t have PhDs doesn’t mean we don’t have something to say!
I write this post as a fan of science fiction, but also unaware of how most scientists think about it. I can imagine two central viewpoints: (1) scientists who enjoy it (like myself), both as entertainment and as a bit of critical thinking, and (2) scientists who dislike it due to its tendency to portray “evil scientists” and/or science and technology gone awry, destroying the world.
I didn’t really grow up reading science fiction. Sure, I was (and am) completely obsessed with some fantasy novels (e.g. Lord of the Rings) but never made the leap to becoming a true sci-fi nerd. It wasn’t until I started studying science more fully that I developed an interest in speculative science fiction. Many of the stories do deal with technology taking over civilization – but embedded within this framework is a great deal of excitement, along with some deserved anxiety.
The best way for me to explain these conflicting emotions is with an example of something that happened to me in the past few weeks. We are slowly inching closer to developing lab-produced organs, which would be incredibly beneficial for a lot of obvious reasons. Just this month there have been developments toward mass-produced red blood cells, as well as bioartificial lungs. Eerily, I read about these discoveries as I was tearing my way through Margaret Atwood’s Oryx and Crake, a speculative fiction novel about a bio-engineered future, including “pigoons” (pig/balloon) that have grown to massive sizes in order to grow 6 kidneys at a time for organ harvest, and “ChickieNobs,” a fast food product made from transgenic chickens that have no brains or beaks and grow 8 chicken breasts at once. While reading, I was simultaneously in wonderment about how we could be reaching the ability to actually engineer these creatures, but obviously nervous about the implications described in the novel. (No spoilers here!)
Some scientists might write this kind of anxious thinking off as trash. “We’re trying to develop organs to save lives – we don’t need a bunch of crazies trying to stop us in order to avoid a hypothetical bioengineering apocalypse!” But scientists are born and raised to be skeptical – and that’s all that much of this writing is. Being skeptical about the pure goodness of scientific advance.
But more importantly, science fiction is one of the ways that non-scientists absorb science. Oryx and Crake is a national bestseller, suggesting that millions of people have read her tale of bioengineering gone wrong. While we should assume that the public knows that this is in fact fiction and doesn’t take it entirely seriously, these stories do raise questions about the potential misuses of science that might not be as prevalent otherwise. I believe that scientists have a duty to communicate with the public (and not all agree with me on this). By knowing where non-scientists are coming from, scientists can better address some of the potential issues that might be raised by their achievements.
Sci-fi also provides a venue for discerning how our ways of thinking about science have developed historically. One of my favorite time periods for sci-fi is the 1950s: it was a time when just enough was known to speculate wildly, but not enough to fully disregard these speculations. After all, Watson and Crick did not discover the DNA structure until 1953! Thus you have the birth of many of our superheroes, variously mutated by “cosmic rays” or radiation, altering their molecular structures and giving them superpowers. We had just enough pieces to wonder, but not enough to know the full picture.
And sometimes the stories they told ended up becoming today’s truths. Reading stories that feature the scientific dreams of these writers, now knowing that they’ve come true, can be heart-wrenching. In one of my favorite short stories, “The End of the Beginning” in R is for Rocket, Ray Bradbury describes a couple gripping their seats with excitement and nervousness as their son boards a shuttle – the first shuttle to land on the moon. This collection was published in 1962, seven years before Apollo 11 landed on the moon. Bradbury’s description is incredible:
All I know is it’s really the end of the beginning. The Stone Age, Bronze Age, Iron Age; from now on we’ll lump all those together under one big name for when we walked on Earth… Millions of years we fought gravity. When we were amoebas and fish we struggled to get out of the sea without gravity crushing us. Once safe on the shore we fought to stand upright without gravity breaking our new invention, the spine, tried to walk without stumbling, run without falling. A billion years Gravity kept us home… That’s what’s so really big about tonight … it’s the end of old man Gravity and the age we’ll remember him by, once and for all.
Gives you shivers, eh? Of course, that day has come and gone in real time. We are still constrained by gravity, and we haven’t set foot on any world beyond the moon. But these science fiction stories can bring us back to that time of wonderment and help us experience a feeling we missed: the great excitement of space on the verge of being conquered. And although it didn’t happen quite the way Bradbury described, we can pretend for at least a little while.
Science is about that excitement: the drive to discover, the idealism, the hope. It’s easy to forget that while working away at my lab bench, pipetting DNA into tubes. We know a little more about science now – enough that we no longer dream of mutated superheroes. But we still dream about the day we’ll make our big discovery and solve our own scientific problem.
Science fiction can remind us of this wonderment and hope. But it also sends us a warning – to think about the potential implications of our findings, beyond our idealistic dreams. While those implications might not be as exciting as a science fiction novel, they exist, and scientists should be aware of them.
With that, I’ll leave you with this quote from David Brin, taken from Nature’s series of interviews with science writers this past winter.
Science fiction is badly named — it should have been called speculative history… Whether you are in a parallel reality or exploring the future, it is all about the implications of change on human lives. The fundamental premise of sci-fi is not spaceships and lasers — it’s that children can learn from the mistakes of their parents.
From Carl Woese’s 2004 piece, “A New Biology for a New Century:”
A heavy price was paid for molecular biology’s obsession with metaphysical reductionism. It stripped the organism from its environment; separated it from its history, from the evolutionary flow; and shredded it into parts to the extent that a sense of the whole—the whole cell, the whole multicellular organism, the biosphere—was effectively gone. Darwin saw biology as a “tangled bank,” with all its aspects interconnected. Our task now is to resynthesize biology; put the organism back into its environment; connect it again to its evolutionary past; and let us feel that complex flow that is organism, evolution, and environment united. The time has come for biology to enter the nonlinear world.
More on this article to come in the next week.
This is essentially why I feared molecular biology for most of my life. I felt like it removed organisms from their place in the web of life and analyzed their parts separately, with no connection to how they all worked together except on a very small scale. The purpose of this mechanistic reduction seemed to be harnessing nature to help ourselves, rather than grasping the big picture: an understanding of the world and the patterns that make up life. The former I saw as selfish, the latter as dignified and learned (or something like that).
Working in molecular biology, I’m learning that things aren’t so black and white. Most molecular biologists are interested in these bigger-picture, nearly philosophical questions but search for their answers on the small scale, looking for miniature models of larger truths. Or, on the other hand, they are interested in curing cancer or other diseases. As a Jewish cynic, I clearly hate my own species and so see this as a waste of time – but I really shouldn’t blame scientists for wanting to help other people.
Or they are just looking for a way to spend their time, and there are a lot of jobs in science.
I would recommend reading the whole piece if you have an interest in reflecting upon the development of science in the 20th century. Or if you’re a human living in the 21st century. For real real.