Wednesday, May 5, 2010

Free to Choose

Today's New York Times features a bizarre op-ed by Charles Murray of the American Enterprise Institute, a think tank that describes its scholars as "committed to expanding liberty, increasing individual opportunity, and strengthening free enterprise."

Murray summarizes the results of a head-to-head comparison of standardized-test performance for students in public and charter schools. The charter-school students, he admits, "generally had 'achievement growth rates that are comparable' to similar Milwaukee public-school students. This is just one of several evaluations of school choice programs that have failed to show major improvements in test scores, but the size and age of the Milwaukee program, combined with the rigor of the study, make these results hard to explain away."

"So let's not try to explain them away," Murray says.

This makes good sense. This kind of experiment has been done many times, as described by Diane Ravitch in her thought-provoking 2010 book, The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. The results are clear in their lack of clarity: some charter schools are excellent, others are terrible. The data don't support the vision that non-public schools are automatically superior, although some no doubt are. In addition, the data show no indication that public schools faced with competition respond by cleaning up their act, another frequent argument for school choice.

So do these conclusions shake the confidence of a school-choice advocate like Murray? Hardly: "Why not instead finally acknowledge that standardized test scores are a terrible way to decide whether one school is better than another?" he says.

Murray is correct, of course, that test scores have serious flaws. Ravitch, who admits that she was once a strong proponent of both choice and testing, spends much of her book describing the problems with standardized tests. Especially troublesome are the tests that states devise to show that they are meeting the goals of the "No Child Left Behind" act. Ravitch laments both the limited range of skills being tested--essentially basic math and reading--as well as the distortions that inevitably occur when tools meant to monitor progress start to be used to enforce it.

Ravitch forcefully argues that school improvement is a hard slog, not achieved by silver bullets like charter schools or by extensive data collection like that promoted by the Obama administration's "Race to the Top." Instead of statistical analyses modeled on business practices, she advocates a rigorous (voluntary) national curriculum and on-the-ground assessments by professional educators, not business managers.

Murray acknowledges the failures of previous silver bullets: "whether the reform in question is vouchers, charter schools, increased school accountability, smaller class sizes, better pay for all teachers, bonuses for good teachers, firing of bad teachers — measured by changes in test scores, each has failed to live up to its hype." But he concludes that the problem lies with testing, and that choice is still a social good, because it allows parents to choose schools whose teaching styles the parents find appropriate.

It will be interesting to see whether Murray's fellow school-choice advocates follow his recommendation: admit that charter schools offer no measurable benefit, but support them anyway on ideological grounds. Somehow I doubt it.


Monday, April 5, 2010

Changing the Rules

Is the Large Hadron Collider a time machine?

Although I usually like Dennis Overbye's physics writing for the New York Times, I thought he misfired in answering this question yesterday, in the general-audience "Week in Review" section.

In a Q&A entitled "A Primer on the Great Proton Smashup" that discussed the scientific ideas that underlie research at the LHC, Overbye addressed the question:

"What does it mean to say that the collider will allow physicists to go back to the Big Bang? Is the collider a time machine?"

It may seem silly, but it's actually a good question, since I'd bet a lot of people get confused by the metaphors that writers use to motivate the research. These metaphors get repeated often enough that they are almost cliché, but, as with all metaphors, it's important to know which parts to take seriously and which parts are more poetic or even misleading. Not everybody will know which is which, and it's good to explain it every so often.

Here's Overbye's complete answer:

"Physicists suspect that the laws of physics evolved as the universe cooled from billions or trillions of degrees in the first moments of the Big Bang to superfrigid temperatures today (3 degrees Kelvin) — the way water changes from steam to liquid to ice as temperatures decline. As the universe cooled, physicists suspect, everything became more complicated. Particles and forces once indistinguishable developed their own identities, the way Spanish, French and Italian diverged from the original Latin.

By crashing together subatomic particles — protons — physicists create little fireballs that revisit the conditions of these earlier times and see what might have gone on back then, sort of like the scientists in Jurassic Park reincarnating dinosaurs."

I'll discuss in a moment what I think Overbye means by "the laws of physics evolved," but this notion is awfully subtle for a general reader. More importantly, it completely undercuts the whole thrust of the question: physicists believe they are learning about the early universe in high-energy particle collisions precisely because the laws of physics are the same. If the laws are the same, we can create the same conditions (mostly temperature) to learn about what might have happened in the early universe. (He eventually does say that.)

The confusion comes because the phrase "the laws of physics" can mean quite different things.

In the context of the LHC, it seems clear to me that the phrase refers to the behavior at the deepest levels of the universe. These rules don't get repealed overnight.

In fact, as I understand the phrase, it refers not to the current human description of events, which changes as we learn more, but to the "truth," which doesn't. Otherwise, it wouldn't make sense to say that we want to learn about the laws of physics from the collider (since, by that definition, we would already know the laws, even if they're wrong).

Still, we often say the laws of physics say that something is impossible. In that context, the phrase can only refer to our current understanding of the laws, as best as we can discern them.

In fact, when we talk about the laws of physics we're frequently not talking about the deep levels probed by the LHC. Instead, we're referring to laws that describe the more mundane behavior of objects in our cold everyday reality.

In one sense, these "laws" are just a manifestation of the deeper laws. Describing the world in terms of protons, or nuclei, or atoms, or molecules, or cells, or organs, or organisms, or societies, is often vastly more useful than describing it with quarks or strings.

In some cases, the higher-level description can be mathematically related to the deeper description, for example by "coarse graining" the description to smooth out fine details.
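As a toy illustration of coarse graining (my own example, not drawn from any of the physics discussed here): average a finely sampled, noisy signal over blocks, and the fine-scale details wash out while the large-scale behavior survives in a much simpler description.

```python
import math
import random

# Toy coarse-graining sketch (my own, illustrative only): a slow sine wave
# plus small-scale noise, sampled finely, then averaged over blocks.
random.seed(0)
n, block = 1024, 64
fine = [math.sin(2 * math.pi * i / n) + 0.2 * random.gauss(0, 1)
        for i in range(n)]

# Block averages: the noise (the "messy details") averages away, while the
# slow sine wave (the "large-scale law") remains.
coarse = [sum(fine[i:i + block]) / block for i in range(0, n, block)]

print(len(coarse))  # 16 numbers now describe what took 1024 before
```

The coarse description is vastly smaller, yet for questions about the large-scale behavior it loses almost nothing.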

This is the sense in which we can say that the "laws of physics" evolve: when the universe was very hot, the description had to include a lot of ingredients that are no longer important now that the universe is much cooler. We can now accurately describe things using a simplified description that doesn't have to include the messier details. The "laws" are different now.

This is Overbye's answer. But I think it will confuse people, since the goal of the LHC is to learn about the immutable laws, not the simpler descriptions or approximations.

One further, mind-blowing complication. Many cosmologists are exploring the possibility that our universe is just one of an infinite number of universes that formed, like bubbles, out of a larger multiverse. According to this view, the "laws of physics"--perhaps even the dimension of space--may be entirely different in each of these universes.

Even if we will always see the laws of physics as unchanging, they may not be the same everywhere.



Wednesday, March 31, 2010

Picturing Quantum Mechanics

They say a picture is worth a thousand words. But what if those words are wrong?

Very cool recent experiments demonstrated a chemical reaction between molecules below a millionth of a degree (in Science, subscription required). My latest story for Physical Review Focus describes theoretical modeling of this reaction. We accompanied the story with this picture from the news release issued by the Joint Quantum Institute (a partnership between the National Institute of Standards and Technology and the University of Maryland), where the work was done.

It's a pretty picture, with its superhero color scheme and all, and it satisfies our need to avoid a solid block of text. But although it might not be a bad illustration of a room-temperature chemical reaction, it distorts much of what makes these ultra-low-temperature reactions special.

It's clear in the picture that two diatomic molecules are approaching each other, with dramatic consequences in store. The details of how the artist represents the bonds connecting a potassium and a rubidium atom in each molecule don't bother me too much. It doesn't match either of the customary representations, which are ball-and-stick models and the more accurate space-filling models, but there's no perfect way to represent something that can never be seen with visible light. Of course everyone knows that potassium atoms are green, but we'll let that slide, too.

The really problematic part of this picture is very difficult to avoid: the molecules really aren't anywhere, in the sense the picture conveys.

As first shown by experiments at Bell Labs in 1927, matter acts as waves as well as particles. At temperatures below a millionth of a degree, the relevant wavelength for these molecules is hundreds of nanometers, which is much, much larger than the separation of the molecules shown in the picture. There is no meaning to saying that these molecules are separated by such a short distance. They are simultaneously close and far away.

One way to think about this is to invoke Heisenberg's uncertainty principle. According to this principle, if you know an object's momentum with very high precision, you can't, even in principle, know its position very accurately. For these ultracold molecules, the momentum is almost zero, with very high precision, so you can only know where a molecule is to within hundreds of nanometers.
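For the numerically inclined, here's a back-of-the-envelope version of that estimate. The thermal de Broglie wavelength is h divided by the square root of 2πmkBT; the temperature and molecular mass below are my own round numbers, not taken from the paper.

```python
import math

# Back-of-the-envelope check (my numbers, not from the paper): the thermal
# de Broglie wavelength lambda = h / sqrt(2*pi*m*kB*T) for a KRb molecule
# at a few hundred nanokelvin.
h = 6.626e-34        # Planck's constant (J s)
kB = 1.381e-23       # Boltzmann's constant (J/K)
amu = 1.661e-27      # atomic mass unit (kg)
m = (39 + 87) * amu  # potassium-39 plus rubidium-87, roughly

T = 250e-9           # 250 nanokelvin, below a millionth of a degree
lam = h / math.sqrt(2 * math.pi * m * kB * T)

print(f"{lam * 1e9:.0f} nm")  # roughly 300 nm: hundreds of nanometers
```

Plugging in gives a wavelength of a few hundred nanometers, consistent with the claim above.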

There's a second problem, too. The picture shows the molecules with particular orientations in space. That may not seem strange, but the molecules in the experiment were prepared in the rotational "ground state," with the lowest possible energy. Like the s-orbitals of electrons in a hydrogen atom, this state is spherically symmetrical. This means that the molecule is equally likely to be pointing in any direction. This isn't the same thing as saying we don't know what direction it's pointing (even though it sounds like it). Quantum mechanics says that it has no direction, at least until an experiment requires it to.

So the reacting molecules really aren't at any particular distance from one another, and they don't have any particular orientation relative to each other. That's one of the things that makes this chemical reaction--and the theoretical description of it--so interesting.

But good luck drawing that.


Thursday, March 25, 2010


When I was an undergraduate at MIT in the late 1970s, one of the most impressive people I knew was my roommate Walter.

Walter was a mechanical engineer, and knew an incredible amount of just-plain-practical stuff about how to make things work, from types of stainless steel to the ins and outs of convective cooling.

The rest of us vicariously enjoyed his entry in Woodie Flowers' design competition, the 2.70 contest (named for the number of the design course). The idea was to take a box of stuff--tongue depressors, rubber bands, and so forth--and turn it into a machine that would beat other students at a glorified king-of-the-hill task. This was way before I heard the phrase KISS, "Keep It Simple, Stupid," but we all learned that successful designs took this minimalist idea as a core principle.

This sort of contest has been picked up by others, even bringing high-school teams into the game, for example in the FIRST competitions. It's a great way to inspire imaginative people in ways that go way beyond the ordinary classroom experience. And it's a hands-on demonstration of the creativity that underlies true design.

Walter also worked with Otto Piene, the artist who headed MIT's Center for Advanced Visual Studies. Among other things Walter helped sew enormous inflatable anemones for Piene's Centerbeam project on the mall in Washington, D.C., as well as designing and building beam-steering mechanisms for an early laser show.

I'd forgotten all of this until reporting my latest story for CACM on a new project from MIT, which brings together two teams. One group comes from the Department of Urban Studies and Planning, which is exploring ways that distributed technology can illuminate or improve urban life, such as tracking trash disposal or using cell phones to monitor commuting. The second group is from the "Aero and Astro" department, where researchers have been exploring autonomous vehicles for military and other applications.

The Flyfire project aims to do something even stranger: to use LED-bearing helicopters to make a giant display. Although there may be some practical need for such a display, the most compelling vision for now is to create giant public art projects, of the kind pioneered by Piene and others.

Maybe it will never be particularly useful. Maybe it will never even work. But these grand visions tap into something essential in the human spirit.

Monday, March 22, 2010

Mining Evolution

Can a worm get breast cancer? And how would you know if it did (since it doesn't have breasts)?

Biomedical research has made great use of "disease models": conditions in lab organisms that resemble the human diseases that the researchers really want to learn about. By seeing how a model condition develops and how it responds to drugs or other changes, researchers can make better guesses about what might help people. But finding such disease models usually requires some obvious similarity between the outward manifestations of the disease in humans and the animal subjects.

In the Proceedings of the National Academy of Sciences, a team from the University of Texas at Austin led by Edward Marcotte uses the underlying molecular relationships to find connections between disorders with no such obvious relationship. In addition to a worm analog of breast cancer, they found an amazing connection between plant and human disorders. The analysis of plants' failure to respond to gravity led them to human genes related to Waardenburg syndrome. This syndrome includes an odd constellation of symptoms resulting from defects in the development of neural crest cells.

I saw Marcotte speak about this fascinating work at the conference I attended last December in Cambridge, Massachusetts. My writeup should be posted soon by the New York Academy of Sciences.

Biologists have repeatedly found that the networks of interacting molecules are organized into modules. Over the course of evolution, these modules can be re-used, often for purposes quite different from their original function. This much is well known, although seeing the persistence of modules across the vast evolutionary separation of plants and people is still very dramatic.

What the Austin team did was to devise a methodology to identify related molecular modules in different species even without relying on similar outward manifestations, or phenotypes. They combed the known molecular networks of different species for modules that had a lot of "orthologous" genes: those that had retained similarity--and similar relationships--through evolution. They call the particular traits associated with these genes "orthologous phenotypes," or "phenologs." "We're identifying ancient systems of genes that predate the split of these organisms, that in each case retain their functional coherence," Marcotte said at the conference.

The importance of this scheme is that many molecular networks are poorly mapped, especially in humans. But if a particular gene is part of the network underlying a phenotype in another species--such as poor response of a plant to gravity--it's a good guess that the corresponding gene may be active in the orthologous phenotype in people. The researchers in fact confirmed many of these predicted relationships. Some of these genes were previously known to relate to disease, while others were new. The researchers created a list of hundreds more that they still hope to check.

These genes could give researchers many potential new targets for drugs or other interventions in diseases. So evolution is not just helping us to understand the biological world we live in, but helping us devise ways to improve human health.

Friday, March 19, 2010

The Language of Life

Ten years after the announcement of the draft human genome, the world of human health seems in many ways unchanged. But it is changing, in many profound ways, says Francis Collins, who led the government-funded part of the genome project and is now the director of the National Institutes of Health.

Collins' new book, The Language of Life: DNA and the Revolution in Personalized Medicine, aims to help the public to understand the changes so far, and those that are still to come. He covers a wide range of topics, but the guiding theme is the promise of "personalized medicine" that tailors treatment for each individual based on their genetic information.

As he shows in his occasional columns in Parade magazine, Collins is a skilled communicator of complex medical topics, including their ethical and personal dimensions. He steers authoritatively but caringly through challenging topics like race-based medicine. On the pros and cons of genetic screening, for example, he describes the desirability of genetic tests as a product of not just the relative and absolute changes in risk associated with a gene, but the seriousness of the disease and the availability of effective intervention.

I confess that I was worried that Collins might let his well-publicized Christian beliefs color this book (his previous book is called The Language of God). They did not. His beliefs arise a few times, for example in the context of stem cell research, but he deals with serious ethical questions with great respect for different points of view. In addition, as should be expected for any modern biomedical researcher, he repeatedly and matter-of-factly draws important insights from evolution.

On the whole, the writing is accessible to general readers, even as Collins discusses complex scientific topics. On occasion, however, he shows an academic's tolerance for complex, caveat-filled verbiage, as when he writes, "Therefore, at the time of this writing, the effort to utilize genetic analysis to optimize the treatment of depression has not yet reached the point of effective implementation." This stilted language is the exception, but he also slips into occasional jargon that might leave some readers temporarily stranded.

A trickier issue is Collins' frequent use of patient anecdotes to illustrate how genetic information can lead to better decisions. These human stories, drawn from his long research and clinical experience, certainly succeed at Collins' goal of inspiring hope for the potential of personalized medicine, as well as showing clearly what it means to people. But the succession of optimistic stories begins to seem skewed, drawing attention away from structural challenges in American medicine that could seriously undermine this potential. When Collins mentions these issues, it tends to be in careful euphemisms: "A recent study estimated that in the United States each year, more than 2 million hospitalized patients suffer serious adverse drug reactions, with more than 100,000 of those resulting in a fatal outcome."

In a similar vein, Collins describes the successful identifications of gene variants associated with macular degeneration. The fact that similar studies for other diseases have been rather disappointing doesn't seem to bother him much. Perhaps his decades in research, including the identification of the cystic fibrosis gene, have made him confident that these problems too will pass. But he comes across as a very optimistic person.

In spite of my quibbles, I think The Language of Life succeeds well at putting the omnipresent news stories about genetic advances in a useful context of individual medical choices. As a writer who covers these areas of science, I didn't learn an awful lot of new things from the book, but I think most people will, and will enjoy themselves in the process.

Tuesday, March 16, 2010


In many fields of science and engineering, any technical argument must be formulated mathematically if it is to be taken seriously. In contrast, in popular writing--even about science and engineering--including a single equation is a known recipe for getting many readers to click on to the next story. Understanding this discordant reaction to math reveals a lot about how popular and technical writing differ.

Back in 2003, when I was thinking about morphing from a practicing scientist into a science writer, I seriously questioned how I could possibly explain scientific arguments without variables and equations. Without the precision of a mathematical description, I wondered, how could I really know whether readers interpreted ordinary English phrases the way I intended? Moreover, without an algebraic description, how could readers judge how well a model matches observations?

Interestingly, in the years since, I've almost never felt hobbled by not being able to explain things with equations.

A lot of the difference arises from the different goals of journal articles and popular stories, and their very different sources of authority.

In a journal article, the goal is to convince other experts. In other words, the article should ideally be self-contained, assembling all the relevant details so that an independent observer can make up their own mind.

In contrast, a popular science story aims merely to describe the conclusions, not prove them. As David Ehrenstein, the editor at Physical Review Focus, once told me, the goal is to present a plausibility argument for the conclusions: to give enough context and explanation that readers can appreciate what's being claimed and who's claiming it.

The last point is also critical: by and large, popular writing gains its authority from the quoted judgments of experts, not directly from the model or observations. Indirectly, of course, this authority comes from the reputation of the writer and the publication (that is, the editors), because they are the ones who decide which commentators are worth quoting. (Of course, those commentators must also respond to emails or phone calls!)

Since the goal is plausibility and a qualitative understanding, the limited precision of ordinary writing is usually good enough to convey the message.

Wednesday, March 10, 2010


The development of a single cell into a complex creature such as you or me is almost miraculous. Early scientists were so baffled by this process that some supposed that the fertilized egg might contain a complete specification of the final organism.

This is just silly.

Still, it has only been in the last few decades that researchers have worked out how a smooth starting pattern in the concentrations of a few molecules develops into a complex, highly structured pattern. These molecules act as transcription factors that modify the activity of genes making dozens of other transcription factors, which spontaneously form patterns that serve as the invisible framework driving all subsequent development. The essence of this understanding is described in the charming 2006 book Coming to Life: How Genes Drive Development by Christiane Nüsslein-Volhard, who shared a Nobel Prize for her germinal work on the development of the fruit fly, Drosophila melanogaster.

A central part of this view is that different regions of the developing embryo are uniquely identified by the particular combination of transcription factor concentrations they contain. Each combination (influenced in part by its neighbors) stimulates the cells in that local region, or "compartment," to develop toward a particular final structure, such as the hindmost edge of a wing. Mutated animals that are missing the genes for particular factors develop characteristic problems, like extra wings or legs sprouting where their antennae should be. Researchers have also learned how to label the molecules with fluorescent dyes that directly reveal the invisible patterns of factors that drive later development.

In their book The Plausibility of Life, Marc Kirschner and John Gerhart include developmental compartments as one of the key elements, along with weakly-linked modules and exploratory behavior, of "facilitated variation": the ability of organisms to respond to small genetic changes with large but viable changes in their structure.

Compartmentalization is in some ways similar to exploratory behavior, in which developing body structures such as blood vessels respond to local stimuli such as chemicals emitted by oxygen-starved cells. In both cases, individual cells respond to nearby cues, without needing to refer to some master plan. In the case of compartmentalization, however, both the cues and responses are more general chemical changes, in contrast to the more apparent structural changes seen in exploratory development.

What the two processes have in common is that they allow development that is flexible enough to succeed in diverse situations, for example when nutrients are scarce or the embryo is damaged. This sort of robustness presents a clear evolutionary advantage, since it makes it more likely that a complex organism will grow up and survive to reproduce.

But in addition, robust development lets organisms deal with genetic changes. Although many mutations are fatal, some cause dramatic changes in body organization or other features, while the flexible development process adapts to the new situation. As a result of this facilitated variation, evolution is able to explore a wider variety of strategies and move quickly to new solutions.

For Kirschner and Gerhart, this flexibility is key to understanding the nature and rapidity of evolution. A population can explore the potential advantage of a longer hindlimb, for example, without the need to separately coordinate changes in bone, muscle, blood vessels, nerves, and so forth. Adaptive development takes care of all that.

In fact, the pattern of compartmentalization appears to have been much more stable over the course of evolution than the details of body structure have been. The appearance of compartmentalization and the other features that allow facilitated variation look like the crucial revolutionary events that made rapid evolutionary change possible.

Tuesday, March 9, 2010

Poles Apart

Over the past few months, scientists studying global warming have been rocked by a series of awkward revelations. In November, someone made public more than 1000 emails, many quite damning, from climate researchers at the University of East Anglia (UEA), in what skeptics successfully branded as "Climategate." December and January saw the authoritative Intergovernmental Panel on Climate Change (IPCC) admitting that their published claim that Himalayan glaciers would disappear by 2035 was improperly sourced and wrong, even as critics pounced on other instances in which the panel violated their own procedures for ensuring that science was properly represented in their influential reports. (The IPCC had shared the 2007 Nobel Peace Prize with Al Gore.)

Not surprisingly, different people responded completely differently to these events. Critics see the revelations as confirming that global warming, and especially its human source, is just a hoax. Others counter that the revelations that scientists are fallible human beings do not in the least affect the overwhelming evidence for man-made climate change. Unsurprisingly, both extremes find confirmation for what they already believed.

They are also both wrong.

First let's examine the hoax theory. Although we don't know who released the emails, they look to have been culled from more than a decade of exchanges and chosen to best incriminate the mainstream climate researchers, both those at UEA and their correspondents. What is striking is that the emails do not reveal any serious evidence that the researchers are fabricating the global temperature rise.

To be sure, the emails show serious misbehavior, including a request from UEA researcher Phil Jones that others delete emails to avoid a freedom-of-information inquiry. Other emails suggest that the researchers hoped to tweak the IPCC process to exclude legitimate, peer-reviewed papers that they didn't regard as credible.

But if the emails showed evidence of a true hoax, the critics poring over them haven't found it--and not for lack of trying. Instead, they have highlighted examples of ambiguous or unfortunate wording, which are conveniently picked up by the likes of Sarah Palin. But although Palin may not know any better, the critics understand that when Jones referred in 1999 to Penn State researcher Michael Mann's "trick" to "hide the decline," he was describing a technique to de-emphasize the inconvenient truth that tree-ring data don't match measured temperatures, which were clearly rising.

This is a serious matter: if the tree-ring data are not good proxies for temperature in cases where we know the temperature, how can we trust them in cases where we don't know the temperature? But as far as I know, the published papers acknowledge this manipulation, which is an acceptable way to deal with omissions of questionable data. Nothing is being hidden. In fact, this whole issue was addressed by the U.S. National Academy of Sciences in 2006, which confirmed the unprecedented nature of the current warming trend.

The critics know this. Their use of this and similar examples shows that they are less interested in the overall truth than in scoring points and discrediting mainstream climate science.

But the disingenuous actions of the critics do not excuse the behavior of the mainstream scientists.

It is tricky to interpret what must have been seen as private correspondence between colleagues. Nonetheless, the emails seem to show that the East Anglia scientists did not really trust the processes of science, or at least the political decisions based on the science. The researchers in the emails are acutely aware of the political context of their results, and present the data to make their case. For example, UEA tree-ring expert Keith Briffa says "I know there is pressure to present a nice tidy story as regards 'apparent unprecedented warming in a thousand years or more in the proxy data' but in reality the situation is not quite so simple." Briffa, whose data contains the "decline," commendably goes on to argue for a more honest and nuanced description of the data.

It's not unusual for researchers to choose data to tell a particular story. And there really is no evidence that the researchers suspected the story they were telling--of unprecedented, industrially-caused warming--was wrong. But it is clear that the handful of teams around the world who evaluate historical climate trends displayed their data to support a clear end goal. Subsequent revelations show that this mindset also affected the choice of references in the IPCC report, especially with regard to the impacts of climate changes.

One of my correspondents, a biologist who uses complex biological models, likens the climate-science consensus to Lysenkoism--the Soviet era anti-Darwinist biological agenda. I take this person to mean that dissent from the reigning paradigm is not accepted in mainstream scientific circles: anyone questioning the "consensus" is quickly branded a "denier." This is not the way to do science, especially for something as important as climate change. And what it means is that the "overwhelming consensus" is not as convincing as it seemed a few months ago.

But it doesn't mean that the consensus is wrong.


Monday, March 1, 2010

Survival of the Most Entangled

Most people think quantum mechanics affects only atomic-sized objects, but many experiments have shown that its effects can extend over many miles. In an experiment last year, for example, researchers sent pairs of light particles, or photons, between two of the Canary Islands off the coast of Africa, a distance of 144 kilometers.

This picture dimly shows La Palma, where the photons started, as seen from Tenerife, where they were detected.

Although the Austrian team, led by Anton Zeilinger, only detected one in a million of the pairs they sent, they found that these pairs retained the critical property of entanglement. This means that results of measurements on the two particles are related in ways that can't be explained if each particle responds to the measurement independently: the pair acts like a single quantum-mechanical entity. Such pairs can be used to securely transmit information over long distances.

My latest story at Physical Review Focus describes a theoretical analysis of this experiment by researchers in Ukraine and Germany. They suggest that the pairs that survive the half-millisecond trip must have had unusually smooth sailing through the turbulent atmosphere, and that this is part of the reason why they are still entangled. (The motion of the air, like the shimmering of a mirage in the desert, generally disrupts the light transmission, but there are short moments of clarity.) This is a pretty comprehensible idea, so in the story I was able to sidestep a lot of interesting issues about how the entanglement was measured and what it means.

For example, the experimenters delayed one photon by about 50 ns by passing it through a fiber before sending it after the other one. That's not a long time, so the atmospheric conditions probably looked pretty similar to the two photons. Since they were subjected to much the same conditions, it doesn't seem so surprising that they would remain entangled. In fact, the original experimenters were pretty pleased that it all worked--though they must have hoped it would, or they wouldn't have gone to the trouble.

Sending the two photons on the same path certainly isn't the most demanding task, either. More impressive would be sending them on different routes to a final destination where they were compared. But sending an entangled pair is good enough for some quantum communication schemes.

What made the paper particularly interesting was the conclusion that the turbulent atmosphere would be better than, say, an optical fiber that had the same average loss, because the fiber's properties wouldn't change with time. Zeilinger expressed pleasant surprise that the rare moments of exceptional clarity would more than make up for the times when the atmosphere was worse than usual. Still, having no turbulence at all (or a very clear fiber) would be even better.
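The flavor of this conclusion can be captured with a toy model (my own sketch, not the published analysis): let the channel transmission fluctuate log-normally, note that detected pairs come preferentially from moments when both photons get through, and compare against a fiber with the same average transmission but no fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (my construction): transmission T fluctuates log-normally;
# both photons must arrive, so pair detection scales as T**2, and the
# measured two-photon visibility is degraded by a constant noise floor.
def visibility(T, signal=1.0, noise=0.05):
    s = signal * T**2
    return s / (s + noise)

T = np.clip(rng.lognormal(mean=-3.0, sigma=1.0, size=100_000), 0.0, 1.0)

# Detected pairs are weighted toward moments of high transmission.
v_turbulent = np.average(visibility(T), weights=T**2)

# A fiber with the same average transmission, held constant in time.
v_fiber = visibility(T.mean())

print(v_turbulent > v_fiber)  # post-selection favors the clear moments
```

The numbers here are arbitrary, but the mechanism is the one in the story: post-selecting on arrival biases the sample toward the atmosphere's rare clear moments, which a constant-loss fiber never has.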

Exploiting quantum mechanics in secure long-distance communication, for example via satellites, looks more realistic than ever.

Thursday, February 25, 2010

Targeting Cancer

Amy Harmon of The New York Times had an excellent three-part series this week called "Target Cancer." She follows one clinician/researcher as he pursues a "targeted" treatment for melanoma, which aims at the protein produced by a gene called B-Raf that is mutated more than half of the time in this skin cancer.

The series does a great job in following the emotional roller-coaster ride of the doctor, and of course his patients. One early targeted drug doesn't work at all, perhaps because it also attacks normal cells and the side effects become intolerable before the dose is high enough to affect the cancer. A new drug seems not to do anything, but then the team decides to wait for the drug company to reformulate it to deliver higher effective doses.

The results are spectacular: the new formulation causes a virtually unheard of remission in the cancer, and raises hopes in formerly hopeless patients and in the doctors. The excitement and the potential are palpable as some patients dare to hope and others can't bear to. But within a few months, the patients are dying again.

The new drug is an example of personalized medicine, since it is effective only for patients with a particular mutation. There are a few other examples of therapy tuned to patients with a particular genetic profile, such as the breast-cancer drug Herceptin (trastuzumab) and the anticoagulant warfarin (Coumadin).

But this treatment is actually for cancers with a particular mutation--a mutation the normal cells of the patient don't have. Cancer cells generally accumulate more and more mutations as the disease progresses, because cancer disrupts the normal quality-control mechanisms in the cell. A study announced last week (registration required) showed that the specific pattern of mutations could be used to monitor the ebb and flow of the disease during treatment, although it doesn't look practical yet for tailoring treatment.

Unfortunately, as described in this series, even when a drug targets a mutation in a particular patient's cancer, cancers often develop alternate routes to proliferation. Harmon alludes to one approach to this problem: a multi-pronged "cocktail" that attacks many possible mutations at once. Such cocktails are standard, for example, in treating HIV/AIDS.

Without vilifying the drug companies, she explains some of the challenges these profit-oriented companies face in pursuing this approach. In particular, even if a cocktail would ultimately be more effective, seeking approval for it might delay or threaten profits from the drug already in hand, which can win approval even if it extends life by only a few months. This is especially true if other drugs in the cocktail are owned by competing companies. In any case, the difficulties in testing multiple drugs make it much harder to know what is effective and what side effects may appear.

The idea of analyzing molecular networks and attacking them at many points simultaneously is a recurring theme in systems biology. But sometimes it seems very far in the future.

Monday, February 22, 2010

Stoner Magnetism

My latest story at Physical Review Focus describes experimental evidence that a missing atom in a chicken-wire-like sheet of carbon can hold a single extra electron.

Theorists have long expected this to be the case, and that unpaired electrons on such vacancies might join up to make an entire single-atom-thick graphene sheet magnetic at relatively high temperatures. Many researchers are excited about the rapid and unusual motion of electrons in these sheets, and IBM researchers recently described a graphene field-effect transistor, grown on silicon carbide, whose expected cutoff frequency (fT) exceeds 100 GHz. If the layers are also magnetic at normal temperatures, this material could be fun and potentially practical for spintronics, which manipulates both the charge and magnetic properties of electrons.

The actual experiment didn't directly show magnetism, though, just a state that looked like it should hold only one electron. The researchers used scanning-tunneling microscopy to look at a clean, cold graphite surface, which includes many stacked graphene-like layers. In fact, the authors suggest that magnetism may exist in graphite, but not in graphene, because in the latter the effects of two equivalent carbon positions for a vacancy may cancel each other out.

It turns out to be a little bit tricky to explain the connection between local spins, which naturally carry a magnetic moment, and magnetism in a bulk material.

The usual story is straightforward: some types of atoms (or vacancies) naturally have a magnetic moment, "like a tiny bar magnet." Nearby moments exert forces that tend to align their neighbors, either the same way or oppositely. If it's the same, then the moments on many different atoms can all line up to form a net magnetization in a large sample, if the temperature is not so high that they get jostled out of position.
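This local-moment story can be sketched in a few lines with a toy two-dimensional Ising model (my own illustration, not anything from the Focus story): each site carries a fixed moment that wants to align with its neighbors, and thermal jostling competes with that alignment.

```python
import numpy as np

# Toy 2-D Ising model of local-moment magnetism: spins +1/-1 on a grid,
# each preferring to match its four neighbors (coupling J = 1).
def magnetization(T, n=16, sweeps=300, seed=0):
    rng = np.random.default_rng(seed)
    s = np.ones((n, n), dtype=int)            # start fully aligned
    for _ in range(sweeps * n * n):           # Metropolis updates
        i, j = rng.integers(n, size=2)
        nb = s[(i+1) % n, j] + s[(i-1) % n, j] + s[i, (j+1) % n] + s[i, (j-1) % n]
        dE = 2 * s[i, j] * nb                 # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())

m_cold = magnetization(1.0)   # below the ordering temperature (~2.27 J)
m_hot = magnetization(5.0)    # well above it
print(m_cold > m_hot)         # ordered at low T, jostled apart at high T
```

Below the critical temperature the moments stay lined up; above it the net magnetization washes out, just as in the paragraph above.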

This description is correct--but only for some magnets.

For other magnets, it's just not accurate to say that the atoms each have magnetic moments that line up with each other. In these so-called "itinerant" magnets, the magnetization comes from the metallic electrons washing over all of the atoms. In this case, preference for one direction or another at a particular atom develops only as a part of the magnetization of the whole sample.

Mathematically, itinerant magnetism takes the form of an instability, in which the energy benefit of aligning the moments of the electrons overcomes the energy cost of doing so. A simple description was developed back in the 1940s by Edmund Stoner at the University of Leeds, and his name is still used to convey the ideas. (I apologize to anyone who expected this post to be about the natural charisma of pot-smokers.)

Of course, the distinction between "local-moment" and "itinerant" magnetism is often somewhat fuzzy, and for the purpose of explanation to the general public it may not seem that important. But to people who understand the issues, getting it wrong is unforgivable, as I found out to my chagrin after using the above simple local picture in my Focus story on the 2007 Physics Nobel on Giant Magnetoresistance (GMR).

GMR read heads in disc drives can be seen as a simple type of spintronics device. In more sophisticated devices that people dream about, electrons will carry their magnetization to new locations, so it's important to be clear on the nature of that magnetism.

Wednesday, February 17, 2010


What a piece of work is a man! For that matter, what an awesomely complex apparatus is any large organism, from a dog to a dogwood!

But as we learn more about biology, our awe shifts from the intricate cellular arrangements in mature multicellular life to the ways these structures arise during development from simpler (but not simple) rules. Even if we can accept simple examples of self-organization, like the spontaneous arrangement of wind-blown sand into regular dunes, the self-assembly of living creatures seems to be a different scale of miracle. But researchers have repeatedly found that simple rules, in which cells respond to local cues like chemical concentrations and mechanical stresses, suffice to describe how various aspects of our complex bodies develop.

In The Plausibility of Life, Marc Kirschner and John Gerhart describe this rule-based strategy, which they call "exploratory behavior," as a very effective way for organisms to develop dependably in the face of unpredictable changes in their environment. But they go further, stressing that flexible, adaptive development speeds evolution by letting small genetic changes give rise to vastly different--but still viable--organisms. Exploratory behavior is thus a critical component of their concept of "facilitated variation."

As an illustration of rule-based organization, Kirschner and Gerhart review the foraging of ants. Steven Johnson described this and other examples in his thought-provoking 2002 book, Emergence. Simply by following local rules and responding to the scent trails left behind by their predecessors, individual ants join to form major thoroughfares between a food source and their nest. No master planner guides their motions.
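A deterministic toy model (my own construction, not from either book) captures the feedback: ants split between two paths in proportion to the pheromone on each, the shorter path is reinforced more strongly per trip, and pheromone slowly evaporates.

```python
# Two paths between nest and food; the short one gets more scent laid
# down per unit time, so traffic funnels onto it with no master planner.
pher = {"short": 1.0, "long": 1.0}      # pheromone levels, initially equal
length = {"short": 1.0, "long": 2.0}    # relative path lengths

for _ in range(500):
    total = pher["short"] + pher["long"]
    shares = {p: pher[p] / total for p in pher}   # fraction of ants on each path
    for p in pher:
        # evaporation plus deposition (shorter trips deposit more per ant)
        pher[p] = 0.99 * pher[p] + shares[p] / length[p]

print(pher["short"] > pher["long"])  # the short trail dominates
```

The numbers are arbitrary; the point is that purely local rules plus positive feedback produce the thoroughfare.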

Similarly, in a developing animal, some cells may find themselves far from the nearest blood vessel. In response to the lack of oxygen, they secrete chemicals that encourage the growth of new capillaries nearby. And in the brain, the intricate wiring of nerve cells is guided in part by signals that they receive and transmit during certain periods of development.

It really has to be this way. Although it's true--and amazing--that the 959 cells of the roundworm C. elegans take up pre-ordained positions in the final creature, the cells in much bigger creatures like us simply can't all have designated roles in the final organism. For one thing, there's just not enough information in our 20,000 or so genes to tell every cell where to go on some genetic master plan. Instead, each cell has to have a degree of autonomy in dealing with new situations. For example, if one of your legs is stunted early on, the muscles, nerves, blood vessels, and skin will all adapt to its new size, rather than blindly proceeding with some idealized plan. Even in C. elegans, the fixed cellular arrangement mostly results from such adaptive behavior of individual cells.

If you're still not convinced, think of the offspring of a bulldog and a Great Dane, which will have a facial and body structure unlike either of its parents. But we are not even surprised that the blood vessels and muscles will successfully adapt themselves to this completely novel shape.

It makes perfect sense that creatures that use this adaptive process in their development would be more successful during evolution.

But the reverse is also true: this flexibility makes evolutionary innovations much easier. The repurposing of mammalian digits for a dolphin's flipper, a horse's hoof, or a bat's wing is much faster if only a few genes have to change to determine the new shape, and the others adapt in parallel. In concert with modular organization, development that is built on exploratory principles is critical to letting evolution explore radically new architectures in response to small genetic changes.

Wednesday, February 10, 2010

Brain-Machine Interfaces

I have a short news story about exchanging information between machines and people's brains, now online at the Communications of the Association for Computing Machinery. This is a difficult field to capture in a few hundred words. There's a lot of progress, but people are trying a lot of different approaches, and they're not all addressing the same problem.

For example, some people are hoping to provide much needed help to people with disabilities, while others see the opportunity for new user interfaces for games.

Naturally, people will be willing to spend a lot more for the rehabilitation. In addition, recreational use pretty much rules out (I hope!) any approach that requires surgically implanting something in the skull. Even the researchers who are exploring rehabilitation don't yet feel confident exposing people to the risk, because they can't be sure of any benefits. As a result, these studies mostly involve patients who have received implants for other reasons.

If surgery is ruled out, there are fairly few ways to get at what's going on in the brain. With huge, expensive machines, you can do functional MRI, but that doesn't look particularly practical. Both Honda and Hitachi are using infrared monitoring of blood flow, with impressive results. But the best established measurement is EEG, which measures electrical signals with electrodes pasted to the surface of the head.

One up-and-coming technique that I mention in the story is called ECoG, or electrocorticography. Like EEG, it measures the "field potentials" that result from the combined actions of many neurons. However, the electrodes are in an array that is draped over the surface of the brain (yes, under the skull), so the signal is much cleaner.

Finally there are approaches like Braingate that put an array of a few dozen electrodes right into the cortex, where they can monitor the spikes from individual neurons. 60 Minutes did a story a while ago that showed people using this technology to move a computer mouse.

If the implants are to be practical, they will need to be powered and interrogated remotely, not through a bundle of wires snaking through the skull. Many people are exploring wireless interfaces for this purpose, as described by my NYU classmate Prachi Patel in IEEE Spectrum.

Brain-machine interfaces can also run in either direction. My story dealt mostly with trying to tap the output of the brain, for example letting paralyzed people control a wheelchair or computer mouse. But input devices, such as artificial cochleas or retinas, are also proceeding quite rapidly. To my surprise, Rahul Sarpeshkar, who works on both directions, told me the issues are not that different.

My guess would have been that input to the brain can take advantage of the brain's natural plasticity, which will adapt it to a crude incoming signal. To usefully interpret the output of haphazardly placed electrodes, people need to do an awful lot of sophisticated processing of the signal, which can slow things down.

The toughest thing about this sort of story, though, is time. There's a lot of progress, but there's a long way to go. Once the proof of principle is in hand, there's still a lot of hard work to do, some of which may involve major decisions about basic aspects of the system. It's hard to communicate the progress that's being made without getting into a lot of details that are only interesting to specialists.

Even when it lets the blind see or the lame walk, writing about engineering is a hard sell for the general public.

Monday, February 8, 2010

Blinded Science

Medical researchers systematically shield their results from their own biases. Other scientists, not so much.

Medical science includes many types of studies, but the universally accepted "gold standard" is the randomized, controlled, double-blind, prospective, clinical trial. In this kind of study, patients are randomly assigned to either receive the treatment being tested or an ineffective substitute and then watched for their response, according to predefined criteria.
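The "randomized" part is easy to sketch with a hypothetical helper (no real trial software is this simple): shuffle the enrolled patients and split them evenly between the two arms, so that the split itself is random.

```python
import random

# Minimal blocked random assignment for a two-arm trial (hypothetical).
def randomize(patient_ids, seed=None):
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)          # random order of patients
    half = len(ids) // 2
    return {pid: ("treatment" if i < half else "placebo")
            for i, pid in enumerate(ids)}     # equal-sized arms

arms = randomize([f"patient-{i:02d}" for i in range(10)], seed=7)
```

Real trials add stratification and regulatory bookkeeping, but the core idea is just this: no human chooses who gets the real thing.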

"Blind" means that the patients don't know whether they are getting the "real thing" or not. This is important because some will respond pretty well even to a sugar pill, and a truly effective drug must at least do better than that. This feature may not be so important in other fields, assuming that the rats or superconductors they study don't respond to this "placebo effect."

But the "double" part of "double blind" is also critical: in a proper trial, the doctors, nurses, and the researchers themselves don't know which patients are getting the real treatment until after the results are in. Without this provision, experience shows, they might treat the subjects differently, or evaluate their responses differently, and thus skew the conclusions. The researchers have a lot invested in the outcome, they have expectations, and they are human.

So are other scientists, I'm afraid.

It is surprising that most fields don't expect similar protection against self-deception. Sure, it's not always easy. In my Ph.D. work, for example, I made the samples, measured them, and interpreted the results. Being at the center of all aspects of the research helped keep me engaged and excited in spite of the long hours and modest pay, and was good training. But I also felt the pressure to get more compelling results.

These days, many fields already involve multidisciplinary collaborations that leverage the distinct skills of different specialists. Would it be so hard for the person who prepares a sample to withhold its detailed provenance from the person doing the measurement until the measurement is finished? With some effort, people could even hide information from themselves, for example by randomly labeling samples. No doubt it takes some of the immediate gratification out of the measuring process, but the results would be more trustworthy.
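Such self-blinding might look like this in practice (a hypothetical workflow, with made-up sample names): relabel the samples with random codes, hand the key to a colleague, measure under the codes, and decode only after the measurements are final.

```python
import random

# Relabel samples with random codes; the key maps code -> true identity.
def blind(sample_ids, seed=None):
    codes = [f"S{i:03d}" for i in range(len(sample_ids))]
    random.Random(seed).shuffle(codes)
    key = dict(zip(codes, sample_ids))
    return codes, key    # distribute `codes`; lock `key` away

codes, key = blind(["control", "annealed", "irradiated"], seed=42)
# ... measure each sample knowing only its code ...
identities = [key[c] for c in codes]   # revealed only at the end
```

The measurement notebook records only codes, so expectations about which sample "should" look better can't creep into the readings.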

As recent events have made clear, trust is especially important in climate change. In October, Seth Borenstein, a journalist with the Associated Press, gave a series of measurements to four statisticians without telling them what the data represented. All four found the same thing: a long-term upward trend, with no convincing sign of the supposed cooling of the last ten years. Only afterward were the data revealed to be global temperature measurements.
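The spirit of the exercise is easy to reproduce on synthetic numbers (NOT real temperature data): fit the long-term slope of a noisy series with a small built-in trend, then see how much ten-year windows scatter around it.

```python
import numpy as np

# Synthetic series: a small upward trend plus noise, in the spirit of
# the blind analysis described above (these are not real temperatures).
rng = np.random.default_rng(1)
years = np.arange(1880, 2010)
series = 0.007 * (years - 1880) + rng.normal(0.0, 0.15, years.size)

def ols_slope(x, y):
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

full = ols_slope(years, series)
decades = [ols_slope(years[i:i+10], series[i:i+10])
           for i in range(years.size - 10)]

# The full-record slope is clearly positive; ten-year window slopes
# scatter widely on both sides of it, which is why short windows make
# weak evidence for "cooling".
print(full > 0)
```

A blinded analyst sees only the numbers, so the long-term conclusion can't be steered by knowing what the data are supposed to show.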

But why did it take a journalist to ask the question this way? Shouldn't this sort of self-doubt be built into all science at a structural level, not just assumed as an ethical obligation?

Experimental science will always be a human endeavor, I hope, so the results can never be completely divorced from expectations. But there are ways of making it more trustworthy.

Thursday, February 4, 2010

Fractal Biology

I first learned about the intermediate-dimension objects called fractals in the late 1970s, from Martin Gardner's wonderful "Mathematical Games" column in Scientific American. One of the cool and compelling things they can explain is how a highly branched circulatory system, with an effective dimension between two and three, can have an effectively infinite surface area, abutting every cell in the body, while taking up only a fraction of the body volume.

Twenty years later, Geoffrey West and his collaborators used this fractal model to explain the well-known "3/4" law of metabolism, in which organisms' resting metabolic rates vary as the 3/4 power of their body mass. West, an erstwhile theoretical physicist from Los Alamos who recently stepped down as head of the delightfully eclectic Santa Fe Institute (for which I've done some writing), has applied similar scaling analyses to other aspects of biology, as well as to resource usage in cities.

Unfortunately, according to Peter Dodds at the University of Vermont, the well known 3/4 law is also wrong. In my latest story for Physical Review Focus, I briefly describe how Dodds uses a model of the branched network to derive an exponent of 2/3.

Interestingly, this 2/3 exponent is precisely what you'd expect from a simple computation of the surface-to-volume ratio of any simple object. A 2/3 law for metabolism was first proposed in the mid-1800s, Dodds said, at "a tobacco factory in France, trying to figure out how much to feed their workers, based on their size. They asked some scientists and they said 'we think this 2/3 rule would make sense.'" Experimental data on dogs seemed to fit this idea.
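The geometric argument behind a 2/3 law can be checked in a few lines: for any fixed shape, surface area grows as length squared while mass grows as length cubed, so area scales as mass to the 2/3 power.

```python
import numpy as np

# For a family of cubes: mass ~ L**3 (unit density), area = 6 * L**2,
# so a log-log fit of area against mass should give a slope of 2/3.
L = np.linspace(1.0, 100.0, 50)   # side lengths
mass = L**3
area = 6 * L**2

slope, _ = np.polyfit(np.log(mass), np.log(area), 1)
print(round(slope, 3))  # -> 0.667
```

Any other fixed shape gives the same exponent; only the prefactor changes.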

But later experiments hinted at a slightly higher exponent. "At some point it became more concretely ¾," Dodds said, based on the work of Max Kleiber published in 1932. "He'd measured some things that looked like 0.75 to him. You know, he had nine or ten organisms, and it was easier on a slide rule." At a conference in the 1960s, scientists even voted to make 3/4 the official exponent.

But in the wake of the fractal ideas, Dodds and his collaborators re-examined the data in 2001. "What really amazed me was I went back and looked at the original data and it's not what people thought. People had sort of forgotten about it by that point." Instead, Dodds found, the data really matched 2/3 better. At the very least, the 3/4 law was not definitive. This doesn't mean that the fractal description is not useful, only that it has a different connection to the metabolic rate.

From C.R. White and R.S. Seymour, Allometric scaling of mammalian metabolism, Journal of Experimental Biology 208, 1611-1619 (2005). BMR is resting metabolic rate. The best fit line has a slope (exponent) of 0.686±0.014 (95% CI), much more consistent with 2/3 than with 3/4.

Other authors have since supported this conclusion, especially after omitting animals whose resting metabolism is hard to measure, such as large herbivores like kangaroos and rabbits, and tiny shrews. The real biological data are messy, and perhaps it is silly to expect a simple mathematical law to apply to diverse biological systems. In any case, the difference between the two exponents is modest, amounting to a factor of about 2.5 in metabolism over the range of experimental data in the plot.

Some experts, such as the commentators I interviewed for the Focus story, still think that the 3/4 law is correct. But it seems plausible that many decades of experimental observations have been colored by researchers' expectations. Science remains a human endeavor.

Wednesday, February 3, 2010

Beautiful Data

There's an old joke where a scientist presents "representative data," when everyone knows it's really their best data. Like many jokes, there's a large measure of truth in it. (And like many science jokes, it's not actually funny unless you're a scientist.)

Audiences, whether in person or in print, like a good story. And it's best when data tell a story on their own, without further words of explanation. Scientists can handle tables of numbers better than most people, but, even for them, pictures tell the most compelling stories.

In fact, many experienced researchers begin preparing a new manuscript by deciding what figures to include. In part this is because figures take a lot of space in journals, but in addition many readers will go from the title straight to the figures, bypassing even the short abstract. If the pictures don't tell a good story, readers may just move on.

This is what it means when scientists say data are "beautiful": not that they have some intrinsic aesthetic appeal, but that they tell a good story about what's happening. Ideally, the story is compelling because the experiments have been done very well. But that's not always the reason.

Some of the most beautiful data I ever saw, in this sense, were presented at Bell Labs by Hendrik Schön in early 2002, at a seminar honoring Bob Willett and the other winners of that year's Buckley Prize. In contrast to Willett's painstaking work over many years elucidating the properties of the even-denominator fractional quantum Hall effect, Schön presented one slide after another demonstrating a wide variety of phenomena in high-mobility organic semiconductors.

The problem was that the story the beautiful data were telling was a lie. Schön's rise and fall were expertly described in Eugenie Reich's 2009 book Plastic Fantastic, and I was later on the committee that concluded that he had committed scientific misconduct.

But honest scientists also must be careful when they choose data that supports the story they want to tell. There is an intrinsic conflict of interest between telling the most compelling story and facing honestly what the data are saying. Decisions about which data to omit, and how to process the remainder, must be handled with great care.

As a rule, scientists overestimate their objectivity in selecting and processing data. As individuals, they are more swayed by expectations than they would like to admit. Collaborators can help keep each other honest, but only to a degree.

One thing that keeps science on track in these situations is that other people may make the same sort of measurements. Often different experimenters have a different idea of what's right, and the back-and-forth helps the field as a whole converge toward the truth.

But what happens when a whole field expects the same thing? There's a real danger that the usual checks and balances of scientific competition will break down, and all the slop and subjectivity of experiments will be enlisted in service of the common expectation.

Just because data is beautiful--that is, tells a good story--doesn't mean that story is right.

Tuesday, February 2, 2010

Tunable Flexibility

Some genes can't be substantially changed without fatal consequences, while tweaking other genes lets organisms gracefully adapt to new situations.

This modular organization, in which some groups of components maintain a fixed relationship to one another even as the relationship between groups changes, is common in biology. In their book, The Plausibility of Life, Marc Kirschner and John Gerhart argue that the weak linkages between unchanging modules are a critical ingredient of "facilitated variation," which in turn makes rapid, dramatic evolution possible. But this long-term adaptability may be, in part, a fortunate side effect of a system that lets individual organisms respond to changes during their lifetime.

Natural selection is not based on compassion. It would be reasonable if both essential genes--those within modules--and nonessential genes--including those forming the weak linkages between modules--mutated equally readily. A mutation within a module might kill its host, but that's the way of progress. Mutations of the linkages could survive, and, by altering the relationship between modules, allow a population to explore new innovations.

Evolution would be more efficient if genes within modules didn't mutate as quickly as those between them. But it's not required.

But what if the evolvability is just one facet of a more general flexibility? At a December 2009 meeting in Cambridge, MA, which I'm covering for the New York Academy of Sciences, Naama Barkai of the Weizmann Institute in Israel showed evidence that genes that evolve rapidly also show greater variability in expression.

To measure the rate of evolution, Barkai's postdoc Itay Tirosh compared different yeast species to see which genes had the most differences. Some genes differed a lot, while others were quite similar.

Tirosh also looked at two measures of the intrinsic variability of gene expression (measured by messenger RNA levels): the degree of change in response to changed conditions, like stress, and the time-dependent variation in expression, or noise. Again, some genes varied a lot, while others stayed quite steady. Moreover, the variable genes were likely to be the same ones that evolved rapidly.

These three correlated measures of gene flexibility were connected with differences in the structure of the promoter, which is the region of DNA near where its transcription into RNA begins. Flexible genes tended to include the well-known "TATA" sequence of alternating thymine and adenine bases, as well as different arrangements of the nucleosomes.
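As a toy illustration (hypothetical sequences, and a simplified consensus, not anything from the study itself), one can scan promoter sequences for the canonical TATA box:

```python
import re

# Canonical TATA-box consensus TATAWAW, where W stands for A or T.
TATA_BOX = re.compile(r"TATA[AT]A[AT]")

def has_tata(promoter):
    return bool(TATA_BOX.search(promoter.upper()))

print(has_tata("ccgcgTATAAAAggc"), has_tata("ccgcgcgcggc"))  # True False
```

Real promoter analysis uses position-weight matrices rather than a single regular expression, but the idea of classifying genes by promoter features is the same.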

A complete understanding of the role of flexibility in both short- and long-term variation will require a lot more research. But these results support the notion that arrangements that let organisms adapt to the slings and arrows of everyday life also give them the tools to rapidly evolve dramatically new ways of life.

Friday, January 29, 2010

Fusion on the Horizon?

When I arrived at MIT in 1976, fresh off the bus from Oklahoma, nuclear fusion looked like an exciting scientific career. The country was still reeling from "the" energy crisis (oil was over $50/barrel in today's prices!), and fusion was the energy source of the future.

It still is.

The promise has always been compelling, and is often described as "unlimited pollution-free energy from seawater." The fusing of two hydrogen nuclei to form a helium nucleus, releasing abundant energy without the radioactive products of nuclear fission, certainly seems cheap and clean. Indeed, this kind of process is the ultimate source of all solar energy as well, and the H-bomb showed that we can create it on earth.

So the challenges for fusion are not fundamental. They're "just engineering."

Foremost among these challenges is keeping the hydrogen nuclei together when they're heated to millions of degrees. This temperature is needed so they can overcome their natural electrical repulsion, but when they have a lot of energy they're just as likely to go in other directions. Sadly, techniques to confine these tiny nuclei seem to require tons and tons of expensive, high-tech equipment. Of course, advocates of cold fusion, now called "Low Energy Nuclear Reactions," think they don't have to solve this problem, but most scientists are unconvinced.
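A back-of-envelope estimate (my own, from standard constants) shows why the temperatures are so extreme: express the Coulomb barrier between two protons at nuclear distances as a temperature.

```python
# Electrostatic barrier between two protons at ~1 fm, where the strong
# force takes over, converted to a temperature via Boltzmann's constant.
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
k_B = 1.381e-23    # Boltzmann constant, J/K
pi = 3.141592653589793
r = 1.0e-15        # separation, m

barrier_J = e**2 / (4 * pi * eps0 * r)
T_kelvin = barrier_J / k_B
print(f"{T_kelvin:.1e}")  # ~1.7e10 K
```

Quantum tunneling and the high-energy tail of the thermal distribution let fusion proceed at temperatures a couple of orders of magnitude below this crude estimate, which is why "millions of degrees" (around 10^8 K in practice) suffices.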

The traditional approach to fusion, then being pursued at MIT, involves confining a donut-shaped plasma of ultra-hot charged particles by using an enormous magnetic field. One problem is that the plasma finds all sorts of ways to wiggle out of the confinement. Over the decades, researchers have made steady progress in controlling these "instabilities." Recent research, still done at MIT and published online in Nature Physics this week, used a surprising technique of levitating a half-ton magnet in mid-air.

The other mainstream approach is to squeeze and heat hydrogen-containing materials by blasting a pellet with powerful lasers from all sides. Research at Lawrence Livermore's National Ignition Facility, published online in Science this week, showed promising results for this approach.

I've always found it confusing to imagine milking a steady stream of power out of occasional explosions inside a horrendously expensive, delicate laser apparatus. In fact, the long-defunct radical magazine Science for the People published an article in 1981 claiming that "inertial confinement" fusion was just a plot by the military to test fusion explosions in the lab. That at least made sense.

The new results seem like steps forward for both approaches, but there's a long way to go. For one thing, neither group actually fused anything. They just set up conditions that seemed promising.

The reality is that no researchers want to actually use fusion-capable fuel in their machines, because it would make them radioactive (the machines, not the researchers). This may sound surprising, since fusion is supposed to be so clean. But although fusion doesn't produce radioactive nuclei, it does make a whole lot of high-speed neutrons. To generate power, researchers would need schemes to extract the energy from these neutrons. But the neutrons also irradiate everything in sight, turning much of the apparatus into hazardous waste, which would make experiments much harder.

But if the researchers keep making progress, they're going to have to use the real stuff soon. They'll look for any fusion at all, and eventually for "scientific breakeven," where they get more energy out than they use to power all the equipment. "Commercial breakeven," where the whole endeavor makes money, is much further down the road.

I wish the researchers good luck; they may yet save our planet. But I'm also glad I didn't decide to spend the last third of a century working on fusion.

Thursday, January 28, 2010

Mix and Match

It's easier to reconfigure a complex system to do new things if it is built from simpler, independent modules. But in biology, modules may be useful for more immediate reasons.

Biological systems, ranging from communities to molecular networks, often feature a modular organization, which for one thing makes it easier for a species to evolve in response to changes in the environment. In some cases, this flexibility might have been selected during prior changes. But modularity can also make life easier for a single organism during its lifetime, and be selected for this reason.

In their book The Plausibility of Life, Marc Kirschner and John Gerhart include modularity as one aspect of "facilitated variation." In particular, they say, genetic changes that affect the "weak linkages" between modules can cause major changes in the resulting phenotype. As long as the modules, the "conserved core processes," are not disrupted, the resulting organism is likely to be viable, and possibly an improvement on its predecessors.

In describing facilitated variation, Kirschner and Gerhart defer the question of whether facilitating rapid future evolution alone causes these features to be selected. Perhaps it does, in some circumstances. But in any case, we can regard the presence of features that enable rapid reconfiguration as an observational fact.

Moreover, the practical challenges of development demand the same sort of robust flexibility that encourages rapid evolutionary change. Over the development of a complex organism, various cells are exposed to drastically different local environments. In addition, genetic or other changes pose unpredictable challenges to the molecular and other systems of the cells. Throughout these changes, critical processes, like metabolism and DNA replication, need to keep working reliably.

To survive and reproduce in the face of these variations, organisms need a robust and flexible organization. Features that allow such flexibility should be selected, if only because they improve individual fitness. These same features may then increase evolutionary adaptability, whether or not that adaptability is, by itself, evolutionarily favored.

Whether adaptability is selected for its evolutionary potential or only for helping organisms thrive in a chaotic environment, it has a profound effect on subsequent evolution. A flexible organization including modularity and other features allows small genetic alterations to be leveraged into large but nonfatal changes in the developing creature, so the population can rapidly explore possible innovations.

Wednesday, January 27, 2010

Changing Times

One way to explain the modularity that is seen in biology is that it helps species to evolve quickly as their environment changes.

But the notion that "evolvability" can be selectively favored is tricky intellectual territory, and people can get drawn into sloppy thinking. Just as group selection cannot be justified by "the good of the species" alone, selection for flexibility cannot be grounded in future advantages to the species. To be effective, evolutionary pressure must influence the survival of individuals in the present.

Whether this happens in practice depends on a lot of specific details. Some simulations of the effect of changing environments have not shown any effect. But at a meeting I covered last year for the New York Academy of Sciences, Uri Alon showed one model system that evolves modularity in response to a changing environment.

Alon is well known for describing "motifs" in networks of molecular interactions. A motif is a regulatory relationship between a few molecules, for example, a feed-forward loop, that is seen more frequently in real networks than would be expected by chance. It can be regarded as a building block for the network, but it is not necessarily a module because its action may depend on how it connects with other motifs.
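To make the motif idea concrete, here is a toy model (mine, not Alon's) of the best-known motif, the coherent feed-forward loop. With AND logic at its output, this motif acts as a persistence detector: the downstream gene turns on only for sustained inputs, not brief pulses. The delay constant and boolean simplification are my own assumptions.

```python
# Coherent feed-forward loop: X activates Y; X and Y jointly activate Z.
# With AND logic at Z, the motif filters out brief pulses of X: Z turns
# on only after X has stayed on long enough for Y to accumulate.
DELAY = 3  # timesteps for Y to build up after X turns on (assumed value)

def simulate(x_signal):
    z_out, y_timer = [], 0
    for x in x_signal:
        y_timer = y_timer + 1 if x else 0   # Y accumulates while X is on
        y = y_timer >= DELAY                # Y crosses its threshold
        z_out.append(int(x and y))          # AND gate at the Z promoter
    return z_out

brief = [1, 1, 0, 0, 0, 0, 0, 0]          # short pulse of X
sustained = [1, 1, 1, 1, 1, 1, 1, 1]      # persistent X

print(simulate(brief))      # → [0, 0, 0, 0, 0, 0, 0, 0]  Z never turns on
print(simulate(sustained))  # → [0, 0, 1, 1, 1, 1, 1, 1]  Z follows, after a delay
```

The same three-node wiring recurs over and over in real transcription networks, which is exactly what makes it a motif.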

Alon's postdoc Nadav Kashtan simulated a computational system consisting of a set of NAND gates, which perform a primitive logic function. He used an evolutionary algorithm to explore different ways to wire the gates. Wiring configurations that came closest to a chosen overall computational result were rewarded by making future generations more likely to resemble them. "The generic thing you see when you evolve something on the computer," Alon said, "is that you get a good solution to the problem, but if you open the box, you see that it's not modular." In general, modules cannot achieve the absolute best performance.
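The flavor of such an experiment can be captured in a short sketch (my own simplified reconstruction, not Kashtan's actual code; the gate count, population size, goal function, and mutation scheme are all assumptions):

```python
import random

def nand(a, b):
    return 1 - (a & b)

N_INPUTS = 2
N_GATES = 4  # enough gates to build XOR entirely from NANDs

def evaluate(genome, x):
    # signals: the primary inputs followed by each gate's output
    signals = list(x)
    for a, b in genome:
        signals.append(nand(signals[a], signals[b]))
    return signals[-1]  # the last gate is the circuit output

def fitness(genome, goal):
    cases = [(i, j) for i in (0, 1) for j in (0, 1)]
    return sum(evaluate(genome, c) == goal(*c) for c in cases) / len(cases)

def random_genome():
    # gate k may read the primary inputs or any earlier gate's output
    return [(random.randrange(N_INPUTS + k), random.randrange(N_INPUTS + k))
            for k in range(N_GATES)]

def mutate(genome):
    g = [list(p) for p in genome]
    k = random.randrange(N_GATES)
    g[k][random.randrange(2)] = random.randrange(N_INPUTS + k)
    return [tuple(p) for p in g]

def evolve(goal, pop_size=100, generations=200):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, goal), reverse=True)
        if fitness(pop[0], goal) == 1.0:
            break
        # fitter wirings seed the next generation (selection)
        pop = pop[: pop_size // 5]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return pop[0]

best = evolve(lambda a, b: a ^ b)  # goal: compute XOR
```

Kashtan's point was about what happens when the `goal` function itself changes every so often; this sketch only shows the single-goal case, which typically yields a working but non-modular wiring.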

Kashtan then periodically changed the goal, rewarding the system for a different computational output. Over time, the structure of the surviving systems came to have a modular structure. One interesting surprise was that in response to changing goals, the simulated systems evolved much more rapidly than those exposed to a single goal.

But Alon emphasized that this was not a general feature. Instead, the different goals needed to have sub-problems in common. Evolution would then favor the development of dedicated modules to deal with these problems. It is easy to imagine that the challenges facing organisms in nature also contain many recurrent tasks, such as the famous "four Fs" of behavior: feeding, fighting, fleeing, and reproducing.

So some biological modularity may reflect the evolutionary response to persistent tasks within a changing environment. But does this explain the wide prevalence of modules? In a future post, I will examine another explanation: that modularity is one of the tools that helps individual organisms adapt to the changing conditions of development and survival during their own lifetimes.

Tuesday, January 26, 2010


When people design a complex system, they use a modular approach. But why should biology?

For us, modularity is a way to limit complexity. Breaking a big problem into a series or hierarchy of smaller ones makes it more manageable and comprehensible, which is especially important if it is assembled by many people--or one person over an extended time.

The key to a successful module is that its "guts"--the way its parts work together--don't depend on how it connects to other stuff. The module can be thought of as a "black box" that just does its job. You don't have to think about it again. For this to work, the connections between modules must be weak, limited to well-defined inputs and outputs that don't directly affect its internal workings.

When it's done right, a module can be easily re-used in new situations. For example, the part of a computer operating system that displays a help menu is tapped by lots of programs, none of which has to worry about how it works.
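In software terms, the black-box idea might look like this minimal sketch (the class, its interface, and its constants are all invented for illustration):

```python
class Thermostat:
    """A black-box module: callers see only set_target() and update()."""

    def __init__(self, target=20.0):
        self._target = target    # internal state, hidden from callers
        self._integral = 0.0     # internal controller detail

    def set_target(self, celsius):
        # well-defined input: the desired temperature
        self._target = celsius

    def update(self, measured):
        # well-defined input (a sensor reading) and output (a heater
        # power request); the controller math stays inside the box
        error = self._target - measured
        self._integral += 0.1 * error
        return error * 0.5 + self._integral

# Any program can reuse the module without knowing its guts:
t = Thermostat()
t.set_target(22.0)
power = t.update(20.0)  # → 1.2
```

The internal controller could be swapped for something completely different, and no caller would need to change, which is exactly the "weak linkage" property.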

But biology is not designed. Biological systems emerge from an evolutionary process that rewards only survival and reproduction, with no regard for elegance or comprehensibility. Re-usability sounds like a good thing in the long run, but doesn't help an individual survive in the here and now.

Nonetheless, modularity seems to be a widespread feature of biological systems. Your gall bladder, for example, is a well-defined blob that receives and releases fluids like blood and bile, but otherwise keeps its own counsel. It can even be removed if necessary. And it does much the same thing in other people and animals.

At a smaller scale, many of the basic components of cells are the same for all eukaryotes. They have the discrete nuclei that define them, and they also have other organelles that perform essential functions, like mitochondria that generate energy. These modules work the same way, whether they happen to appear in a brain cell or a skin cell.

Even at the molecular level, re-usable modules abound. For example, although ribosomes don't have a membrane delimiting them, they consist of very similar bundles of proteins and RNA for all eukaryotic cells, and only modestly different bundles for bacteria. In addition to such complexes, many "pathways," or chains of molecular interactions, recur in many different species.

We have to be careful, of course: simply because we represent complex biological systems as modules doesn't mean they are there. The modules we think we see could simply reflect our limited capacity to understand messy reality. But when researchers have looked at this question carefully, they found that modules really exist in biology, much more than they would in a random system of similar complexity.

But why should evolution favor modular arrangements? And how does a modular structure change the way organisms evolve?

Saturday, January 23, 2010

Don't Fear the Hyphen

Hyphens come up a lot in scientific writing. Or at least they should. Unfortunately, small as they are, many people are afraid of them.

One problem is that hyphens get used for some very different purposes, although all of them tend to bind words or fragments together. I'm only going to talk here about making compound words with them.

Another problem is that hyphens are often optional: if the meaning is clear without one (semantics), you're allowed to omit it (punctuation). This makes it very hard to figure out the real punctuation rules, since they don't arise from syntax alone.

Compound words are hard to predict. Sometimes two words are locked together to form one new word, as in German: eyewitness. Sometimes the two words keep their distance even as they form a single, new concept: eye shadow. But some pairs are bound with the medium-strength hyphen: eye-opener. This mostly happens when both words are nouns, so that the first noun is acting as an adjective, making the second more specialized.

There's really no perfect way to know which form is favored, and it varies over time. Novel pairings generally start out separated and become hyphenated when they seem to represent a unique combined identity. When that combined form becomes so familiar that it is easily recognized, the hyphen disappears (unless the result would be confusing, as in the recent example "muppethugging," or the less novel "cowriting"). You just have to look in an up-to-date dictionary.

For the single compound words and those that are always hyphenated, that's the end of the story. The problem is that the isolated pairs of words sometimes should be hyphenated, too. This hyphenation is not a property of the pair, and it can't be found in the dictionary. It's a real punctuation mark, and depends on the details of the sentence.

The hyphen belongs when the pair is used as an adjective, known as a compound modifier, as in the previous "medium-strength hyphen." But generally the hyphen is omitted when the pair occurs later on: "The hyphen has a medium strength." But the AP Stylebook (not hyphenated!) says that "after a form of to be, the hyphen usually must be retained to avoid confusion: The man is well-known." AP also has a rule that for an adverb-adjective pair, the hyphen is not used if the adverb is "very" or ends in "-ly."

OK, this is getting confusing, so let's regroup: The important principle is that the hyphen is there to make clear a link that might otherwise be missed. If we talk about a "highly important person," it's clear that it's the importance that's high, not the person. But if we talk about a "little known person," it's not so obvious whether it's the knowledge or the person that's diminutive, because "little" can be an adjective or an adverb. If it's an adjective, you might have written "little, known person," but "little-known person" avoids any chance of confusion.

The problem is worse when the first word is a noun, because it doesn't really give you any syntax clues about whether it's acting as an adjective or adverb. This issue comes up frequently in science writing. I imagine most people will realize that a "surface area calculation" refers to a calculation of surface area, and not a surface calculation of the area. But sometimes it's hard to know what will be confusing. I prefer to assume as little as possible about what my readers are getting, so I would use "surface-area calculation." But many editors correct this (and I generally defer to them).

Unfortunately, there is a compelling reason to be sparing with hyphens, which is that they can't be nested. This also comes up frequently in science writing, when a compound modifier is constructed from another compound modifier, as in "surface area calculation results." If we used parentheses to tie together related words, this would be rendered "((surface area) calculation) results." But there's no way to indicate priority with hyphens.

The right thing to do is to decide what level of compounding needs to be made explicit, and retain hyphens at all levels up to that, for example "surface-area-calculation results." Sadly, I frequently see something like "surface area-calculation results." That's just wrong, since the hyphen ties together "area" and "calculation" more strongly than "surface" and "area." In this case you'd be better off leaving out all hyphens and hoping for the best.

Of course, as in most cases of confusing writing, the best alternative is "none of the above": recast the sentence so that it doesn't have compound-compound modifiers. "The results of the surface-area calculation" leaves nothing to chance. But it's clunkier.

Thursday, January 21, 2010


About 540 million years ago, virtually all the basic types of animals appeared in a geologic eyeblink known as the Cambrian explosion.

The late Stephen Jay Gould used this amazing event (if a period of millions of years can be called an event) in his 1989 bestseller Wonderful Life: The Burgess Shale and the Nature of History to debunk two popular myths about evolution. First, evolution is not a steady march toward more and more advanced forms (presumably culminating in us). Second, diversity does not steadily increase. The Cambrian was populated by many types of creature no longer around today, some so exotic as to be worthy of a science fiction film. Rather than growing steadily bushier, the tree of life was later brutally pruned, and we grew from the remaining twigs.

In The Plausibility of Life, Marc Kirschner and John Gerhart highlight another critical facet of this amazing period: since that period of innovation, no more than one new animal type has appeared. The diversity we enjoy today is built from basic parts that were "invented" in the Cambrian explosion.

Rather than get into how and why this happened, for now let's just regard it as an observational fact from the fossil record:

True innovation is rare.

We know this is true in human affairs. Producers of movies and TV shows, for example, often play the odds by re-using and recombining proven concepts. The same goes for technology, where many innovations combine familiar elements in new ways. It's faster, it's cheaper, and it's safer.

For whatever reason, the evolutionary history of life is a series of one-time innovations. After they are adopted, these "core processes" change very little, even though they have eons of time to do so. That doesn't mean that the organisms themselves stay the same--far from it. But they use the core processes in different ways, just as a bat wing is built in the same way as the human hand.

Kirschner and Gerhart discuss several examples of conserved core processes, starting with the fundamental chemistry of DNA, RNA, proteins and the genetic code that connects them. Every living thing on earth uses the same chemistry. The appearance of the eukaryotic cell is defined by the presence of the nucleus, but a host of other innovations occurred at the same time. All eukaryotes from amoebae to people share these features even now. The joining of cells into multicellular organisms was also accompanied by innovations that helped the cells stick together and cooperate. Every plant and animal has retained these features largely intact.

Like the animal body plans of the Cambrian explosion, these bursts of innovation occurred over relatively short periods, and were permanently added to the toolkit. All of the animals we know, from cockroaches to cockatoos, from squirrels to squids, arose by applying those tools in new ways.

But what is it about the core processes that makes them so resistant to change? What makes them so useful? And are the answers to these questions related?

Wednesday, January 20, 2010

Monday Morning Quarterbacks

Many news stories simply report the facts; the better ones put the facts in context and explore their likely impact.

Then there are stories that explain "why."

I have a simple assessment for these explanation stories that saves me a lot of time. This is it:

"What's new?"

More specifically, is there anything about the "explanation" that wasn't known before the event actually happened? If not, click on.

A classic example is election-night analysis. Even before all the votes are counted, pundits materialize to explain what message the voters were trying to send, and which campaign screwed up.

Unfortunately, most of what they say was just as true the day before. Sure, they now have more precise results, and maybe a geographic breakdown and some exit polls. But most of the commentators' facts were already known to everyone. Somehow the surrounding story seems more profound once it aligns with actual events--and once the arguments for the other side have been conveniently forgotten.

Stories about stocks can also be amusing, or pathetic. It's hard to find any report on a market move that doesn't attribute it to concern about Chinese exports, or some such. When the market is truly uncooperative, analysts resort to saying that it "shrugged off" some really dire economic news. Sorry, guys. If you knew how the market would react, you'd be rich.

Unfortunately, the challenge of explanation applies to the real economy as well. I'm a regular reader of Paul Krugman, who warned a year ago that the stimulus package would likely not generate enough jobs. Sure enough, we now have an unemployment rate that once would have been thought unacceptable. So was Krugman right? Or were the conservatives who said the stimulus just wouldn't work?

The sad fact is we hardly know any better now than we did a year ago. We know what happened, of course. But we still don't know what would have happened if we had done something different. Evaluating such past hypothetical situations requires the same kind of modeling, and relies on the same ideological assumptions, as predictions do. Unsurprisingly, the experts mostly see the past the same way they saw the future.

The same problem may apply to climate change. I don't usually think of the existence of global warming as a political question (as opposed to the policy response). In a hundred years, after all, liberals and conservatives will both suffer the same heat and drought and see the same sea level rise, or not. But even then, they probably won't agree on what we did, or didn't do, to get them there. We don't have a duplicate world to do control experiments on.

As Yogi Berra is quoted, "Predictions are hard, especially about the future." Sadly, they are almost as hard about the past.

Tuesday, January 19, 2010

The Plausibility of Life

When creationists, or, as they would have it, advocates of "intelligent design," talk about the "weaknesses" of evolutionary theory, knowledgeable people generally roll their eyes and ignore them. This is appropriate, as these advocates only raise the questions in a disingenuous attempt to promote a religious agenda, under the pretense of open-mindedness and "teaching the controversy." In truth, there is no controversy in the scientific community about the dominant role of natural selection (evolution, the theory) in shaping the observed billions of years of change (evolution, the fact) of life on this planet.

But this response obscures the fact that very interesting issues in evolution remain poorly understood.

I'm not referring to the direct exchange of genetic material between single-cell organisms, although that does call into question the tree-like structure of relationships between these simple species. But at the level of complex, multi-cellular creatures like ourselves, this "horizontal gene transfer" is unimportant compared to the "vertical" transfer from parents to offspring. The tree metaphor is still intact.

But even for complex creatures--especially for complex creatures--there are important open questions about how evolution works in detail. The insightful (and cheap!) 2006 book, The Plausibility of Life, by Marc Kirschner and John Gerhart, began to frame some answers to these questions.

The fundamental ingredients of evolution by natural selection were laid out by Darwin: heritable natural variations lead some individuals to be more likely to survive and thus to pass on these variations.

We now know in great detail how cells use some genes in the DNA as a blueprint for proteins, and how these proteins and other parts of the DNA in turn regulate when various genes are active. And we know, as Darwin could only imagine, how that DNA is copied and mixed between generations, only occasionally developing mutations at single positions or in larger chunks. We understand heritable variation.

We also understand the arithmetic of natural selection, which confirms Darwin's intuition: a mutation that improves the chances that its host will survive and reproduce will spread through a population, while a deleterious mutation will die out (although evolution is indifferent to most mutations). This all takes many generations, but the history of life on earth is long.
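That arithmetic is easy to sketch (a textbook haploid selection recursion; the 1% advantage and the starting and ending frequencies are illustrative choices of mine):

```python
def next_freq(p, s):
    # standard haploid selection recursion: carriers of the mutation
    # leave (1 + s) times as many offspring, and the frequencies are
    # then renormalized so they still sum to one
    return p * (1 + s) / (1 + p * s)

p, s, gens = 0.01, 0.01, 0   # start at 1% frequency, 1% advantage
while p < 0.99:
    p = next_freq(p, s)
    gens += 1

# a mere 1% advantage carries the mutation from 1% to 99% of the
# population in under a thousand generations
print(gens)  # → 924
```

A thousand generations is nothing against the billions of years available, which is the quantitative content of "the history of life on earth is long."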

But there is something missing, what Kirschner and Gerhart call the third leg of the stool: how does the variability at the DNA level translate into variability at the level of the organism? Selection must occur at this higher level, the level of phenotype, but can only be passed on at the level of the genotype. How do we close this loop?

It would be easy if a creature's fitness were some average of the fitness of each of the three billion bases in the DNA, but it's not that simple. For example, if two proteins work together as a critical team, a mutation in one can kill the organism, even if they could be an even better team if they both mutated in a coordinated way.

This sounds disturbingly reminiscent of the neo-creationist argument that life is so "irreducibly complex" that there must have been a creator--er, designer. But Kirschner and Gerhart don't believe that for a second. What they argue instead is that organisms are constructed so that genetic change can dramatically alter phenotype without sacrificing key functions--in a process they call facilitated variation.

In future posts I will discuss clues that this construction--I'm avoiding the word "design"-- is present in organisms today, and some of the principles it follows.