Wednesday, March 31, 2010

Picturing Quantum Mechanics

They say a picture is worth a thousand words. But what if those words are wrong?


Very cool recent experiments demonstrated a chemical reaction between molecules at temperatures below a millionth of a degree above absolute zero (in Science, subscription required). My latest story for Physical Review Focus describes theoretical modeling of this reaction. We accompanied the story with this picture from the news release issued by the Joint Quantum Institute (a partnership between the National Institute of Standards and Technology and the University of Maryland), where the work was done.

It's a pretty picture, with its superhero color scheme and all, and it satisfies our need to avoid a solid block of text. But although it might not be a bad illustration of a room-temperature chemical reaction, it distorts much of what makes these ultra-low-temperature reactions special.

It's clear in the picture that two diatomic molecules are approaching each other, with dramatic consequences in store. The details of how the artist represents the bonds connecting a potassium and a rubidium atom in each molecule don't bother me too much. The rendering doesn't match either of the customary representations--ball-and-stick models and the more accurate space-filling models--but there's no perfect way to represent something that can never be seen with visible light. Of course everyone knows that potassium atoms are green, but we'll let that slide, too.

The really problematic part of this picture is very difficult to avoid: the molecules really aren't anywhere, in the sense the picture conveys.

As first shown by experiments at Bell Labs in 1927, matter acts as waves as well as particles. At temperatures below a millionth of a degree, the relevant wavelength for these molecules is hundreds of nanometers, which is much, much larger than the separation of the molecules shown in the picture. There is no meaning to saying that these molecules are separated by such a short distance. They are simultaneously close and far away.

One way to think about this is to invoke Heisenberg's uncertainty principle. According to this principle, if you know an object's momentum with very high precision, you can't, even in principle, know its position very accurately. For these ultracold molecules, the momentum is almost zero, with very high precision, so you can only know where each one is to within hundreds of nanometers.
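
To get a feel for the numbers, here is a rough back-of-the-envelope estimate--just a sketch, with the temperature plugged in only for illustration (the experiment used potassium-rubidium molecules at a few hundred nanokelvin):

```python
import math

# Rough estimate of the thermal de Broglie wavelength of a KRb molecule.
# The temperature below is assumed for illustration, in the nanokelvin
# regime of the experiment.
h   = 6.626e-34      # Planck's constant, J*s
k_B = 1.381e-23      # Boltzmann's constant, J/K
amu = 1.661e-27      # atomic mass unit, kg

m = (39 + 87) * amu  # mass of a potassium-39 + rubidium-87 molecule, kg
T = 300e-9           # assumed temperature: 300 nanokelvin

# thermal de Broglie wavelength: lambda = h / sqrt(2 * pi * m * k_B * T)
wavelength = h / math.sqrt(2 * math.pi * m * k_B * T)
print(f"de Broglie wavelength ~ {wavelength * 1e9:.0f} nm")  # roughly 300 nm
```

A few hundred nanometers is vastly larger than the molecules themselves, which are well under a nanometer across, so any picture that shows two molecules hovering a bond-length apart at a definite separation is bound to mislead.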

There's a second problem, too. The picture shows the molecules with particular orientations in space. That may not seem strange, but the molecules in the experiment were prepared in the rotational "ground state," with the lowest possible energy. Like the s-orbitals of electrons in a hydrogen atom, this state is spherically symmetrical. This means that the molecule is equally likely to be found pointing in any direction. This isn't the same thing as saying we don't know what direction it's pointing (even though that's also true). Quantum mechanics says that it has no direction, at least until an experiment requires it to.

So the reacting molecules really aren't at any particular distance from one another, and they don't have any particular orientation relative to each other. That's one of the things that makes this chemical reaction--and the theoretical description of it--so interesting.

But good luck drawing that.


 

Thursday, March 25, 2010

Flyfire

When I was an undergraduate at MIT in the late 1970s, one of the most impressive people I knew was my roommate Walter.

Walter was a mechanical engineer, and knew an incredible amount of just-plain-practical stuff about how to make things work, from types of stainless steel to the ins and outs of convective cooling.

The rest of us vicariously enjoyed his entry in Woodie Flowers' design competition, the 2.70 contest (named for the number of the design course). The idea was to take a box of stuff--tongue depressors, rubber bands, and so forth--and turn it into a machine that would beat other students at a glorified "king-of-the-hill" task. This was way before I heard the phrase KISS, "Keep It Simple, Stupid," but we all learned that successful designs took this minimalist idea as a core principle.

This sort of contest has been picked up by others, even bringing high-school teams into the game, for example in the FIRST competitions. It's a great way to inspire imaginative people in ways that go way beyond the ordinary classroom experience. And it's a hands-on demonstration of the creativity that underlies true design.

Walter also worked with Otto Piene, the artist who headed MIT's Center for Advanced Visual Studies. Among other things, Walter helped sew enormous inflatable anemones for Piene's Centerbeam project on the Mall in Washington, D.C., and designed and built beam-steering mechanisms for an early laser show.

I'd forgotten all of this until reporting my latest story for CACM on a new project from MIT, which brings together two teams. One group comes from the Department of Urban Studies and Planning, which is exploring ways that distributed technology can illuminate or improve urban life, such as tracking trash disposal or using cell phones to monitor commuting. The second group is from the "Aero and Astro" department, where researchers have been exploring autonomous vehicles for military and other applications.

The Flyfire project aims to do something even stranger: to use LED-bearing helicopters to make a giant display. Although there may be some practical need for such a display, the most compelling vision for now is to create giant public art projects, of the kind pioneered by Piene and others.

Maybe it will never be particularly useful. Maybe it will never even work. But these grand visions tap into something essential in the human spirit.

Monday, March 22, 2010

Mining Evolution

Can a worm get breast cancer? And how would you know if it did (since it doesn't have breasts)?

Biomedical research has made great use of "disease models": conditions in lab organisms that resemble the human diseases that the researchers really want to learn about. By seeing how a model condition develops and how it responds to drugs or other changes, researchers can make better guesses about what might help people. But finding such disease models usually requires some obvious similarity between the outward manifestations of the disease in humans and the animal subjects.

In the Proceedings of the National Academy of Sciences, a team from the University of Texas at Austin led by Edward Marcotte uses the underlying molecular relationships to find connections between disorders with no such obvious relationship. In addition to a worm analog of breast cancer, they found an amazing connection between plant and human disorders. The analysis of plants' failure to respond to gravity led them to human genes related to Waardenburg syndrome. This syndrome includes an odd constellation of symptoms resulting from defects in the development of neural crest cells.

I saw Marcotte speak about this fascinating work at the conference I attended last December in Cambridge, Massachusetts. My writeup should be posted soon by the New York Academy of Sciences.

Biologists have repeatedly found that the networks of interacting molecules are organized into modules. Over the course of evolution, these modules can be re-used, often for purposes quite different from their original function. This much is well known, although seeing the persistence of modules over the vast evolutionary separation of plants and people is very dramatic.

What the Austin team did was to devise a methodology to identify related molecular modules in different species even without relying on similar outward manifestations, or phenotypes. They combed the known molecular networks of different species for modules that had a lot of "orthologous" genes: those that had retained similarity--and similar relationships--through evolution. They call the particular traits associated with these genes "orthologous phenotypes," or "phenologs." "We're identifying ancient systems of genes that predate the split of these organisms, that in each case retain their functional coherence," Marcotte said at the conference.
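
To give a flavor of the approach, here is a toy version of the kind of overlap test involved--not the team's actual code, and the gene sets and numbers below are invented--using a standard hypergeometric test to ask whether two phenotypes share more ortholog-mapped genes than chance would predict:

```python
from scipy.stats import hypergeom

# Toy phenolog-style overlap test (illustrative only; the gene sets and
# counts below are invented). Assume genes from both species have been
# mapped onto a shared set of ortholog identifiers.
total_orthologs = 3000                  # orthologs shared by the two species (assumed)
human_phenotype = {"g1", "g2", "g3", "g4", "g5", "g6"}  # orthologs linked to a human disease
model_phenotype = {"g4", "g5", "g6", "g7", "g8"}        # orthologs linked to a phenotype in the model organism

overlap = len(human_phenotype & model_phenotype)

# Probability of seeing at least this much overlap by chance, given the
# sizes of the two gene sets (hypergeometric survival function).
p_value = hypergeom.sf(overlap - 1, total_orthologs,
                       len(human_phenotype), len(model_phenotype))
print(f"overlap = {overlap} genes, p = {p_value:.2e}")
```

Pairs of phenotypes whose overlap is far greater than chance would allow become candidate phenologs, and the genes on the model-organism side with no known human disease role yet become testable predictions.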

The importance of this scheme is that many molecular networks are poorly mapped, especially in humans. But if a particular gene is part of the network underlying a phenotype in another species--such as poor response of a plant to gravity--it's a good guess that the corresponding gene may be active in the orthologous phenotype in people. The researchers in fact confirmed many of these predicted relationships. Some of these genes were previously known to relate to disease, while others were new. The researchers created a list of hundreds more that they still hope to check.

These genes could give researchers many potential new targets for drugs or other interventions in diseases. So evolution is not just helping us to understand the biological world we live in, but also helping us devise ways to improve human health.

Friday, March 19, 2010

The Language of Life

Ten years after the announcement of the draft human genome, the world of human health seems in many ways unchanged. But it is changing, in many profound ways, says Francis Collins, who led the government-funded part of the genome project and is now the director of the National Institutes of Health.

Collins' new book, The Language of Life: DNA and the Revolution in Personalized Medicine, aims to help the public understand the changes so far, and those that are still to come. He covers a wide range of topics, but the guiding theme is the promise of "personalized medicine" that tailors treatment for each individual based on their genetic information.

As he shows in his occasional columns in Parade magazine, Collins is a skilled communicator of complex medical topics, including their ethical and personal dimensions. He steers authoritatively but caringly through challenging topics like race-based medicine. On the pros and cons of genetic screening, for example, he describes the desirability of genetic tests as a product of not just the relative and absolute changes in risk associated with a gene, but the seriousness of the disease and the availability of effective intervention.

I confess that I was worried that Collins might let his well-publicized Christian beliefs color this book (his previous book is called The Language of God). They did not. His beliefs arise a few times, for example in the context of stem cell research, but he deals with serious ethical questions with great respect for different points of view. In addition, as should be expected for any modern biomedical researcher, he repeatedly and matter-of-factly draws important insights from evolution.

On the whole, the writing is accessible to general readers, even as Collins discusses complex scientific topics. On occasion, however, he shows an academic's tolerance for complex, caveat-filled verbiage, as when he writes, "Therefore, at the time of this writing, the effort to utilize genetic analysis to optimize the treatment of depression has not yet reached the point of effective implementation." This stilted language is the exception, but he also slips into occasional jargon that might leave some readers temporarily stranded.

A trickier issue is Collins' frequent use of patient anecdotes to illustrate how genetic information can lead to better decisions. These human stories, drawn from his long research and clinical experience, certainly succeed at Collins' goal of inspiring hope for the potential of personalized medicine, as well as showing clearly what it can mean to people. But the succession of optimistic stories begins to seem skewed to draw attention away from structural challenges in American medicine that could seriously undermine this potential. When Collins mentions these issues, it tends to be in careful euphemisms: "A recent study estimated that in the United States each year, more than 2 million hospitalized patients suffer serious adverse drug reactions, with more than 100,000 of those resulting in a fatal outcome."

In a similar vein, Collins describes the successful identification of gene variants associated with macular degeneration. The fact that similar studies for other diseases have been rather disappointing doesn't seem to bother him much. Perhaps his decades in research, including the identification of the cystic fibrosis gene, have made him confident that these problems too will pass. Certainly he comes across as a very optimistic person.

In spite of my quibbles, I think The Language of Life succeeds well at putting the omnipresent news stories about genetic advances in a useful context of individual medical choices. As a writer who covers these areas of science, I didn't learn an awful lot of new things from the book, but I think most people will, and will enjoy themselves in the process.

Tuesday, March 16, 2010

Mathophobia

In many fields of science and engineering, any technical argument must be formulated mathematically if it is to be taken seriously. In contrast, in popular writing--even about science and engineering--including a single equation is a known recipe for getting many readers to click on to the next story. Understanding this discordant reaction to math reveals a lot about how popular and technical writing differ.

Back in 2003, when I was thinking about morphing from a practicing scientist into a science writer, I seriously questioned how I could possibly explain scientific arguments without variables and equations. Without the precision of a mathematical description, I wondered, how could I really know whether readers interpreted ordinary English phrases the way I intended? Moreover, without an algebraic description, how could readers judge how well a model matches observations?

Interestingly, in the years since, I've almost never felt hobbled by not being able to explain things with equations.

A lot of the difference arises from the different goals of journal articles and popular stories, and their very different sources of authority.

In a journal article, the goal is to convince other experts. That means the article should ideally be self-contained, assembling all the relevant details so that an independent observer can make up their own mind.

In contrast, a popular science story aims merely to describe the conclusions, not prove them. As David Ehrenstein, the editor at Physical Review Focus, once told me, the goal is to present a plausibility argument for the conclusions: to give enough context and explanation that readers can appreciate what's being claimed and who's claiming it.

The last point is also critical: by and large, popular writing gains its authority from the quoted judgments of experts, not directly from the model or observations. Indirectly, of course, this authority comes from the reputation of the writer and the publication (that is, the editors), because they are the ones who decide which commentators are worth quoting. (Of course, those commentators must also respond to emails or phone calls!)

Since the goal is plausibility and a qualitative understanding, the limited precision of ordinary writing is usually good enough to convey the message.

Wednesday, March 10, 2010

Compartments

The development of a single cell into a complex creature such as you or me is almost miraculous. Early scientists were so baffled by this process that some supposed that the fertilized egg might contain a complete specification of the final organism.

This is just silly.

Still, it has only been in the last few decades that researchers have worked out how a smooth starting pattern in the concentrations of a few molecules develops into a complex, highly structured pattern. These molecules act as transcription factors that modify the activity of genes making dozens of other transcription factors, which spontaneously form patterns that serve as the invisible framework driving all subsequent development. The essence of this understanding is described in the charming 2006 book Coming to Life: How Genes Drive Development by Christiane Nüsslein-Volhard, who shared a Nobel Prize for her germinal work on the development of the fruit fly, Drosophila melanogaster.

A central part of this view is that different regions of the developing embryo are uniquely identified by the particular combination of transcription factor concentrations they contain. Each combination (influenced in part by its neighbors) stimulates the cells in that local region, or "compartment," to develop toward a particular final structure, such as the hindmost edge of a wing. Mutated animals that are missing the genes for particular factors develop characteristic problems, like extra wings or legs sprouting where their antennae should be. Researchers have also learned how to label the molecules with fluorescent dyes that directly reveal the invisible patterns of factors that drive later development.
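
A cartoon version of this idea--often called the "French flag" model, and a drastic simplification of the real networks--is that each cell reads the local concentrations of a couple of factors, compares them to thresholds, and adopts the identity named by that combination. Here's a toy sketch, with invented gradients and thresholds:

```python
import math

# Toy "French flag"-style sketch: cells along one axis of an embryo read
# two smoothly varying transcription-factor concentrations and adopt a
# compartment identity from the combination. All numbers are invented.

def factor_a(x):
    # smooth gradient, high at the head end (x = 0), low at the tail (x = 1)
    return math.exp(-3 * x)

def factor_b(x):
    # a second factor peaking in the middle of the embryo
    return math.exp(-((x - 0.5) ** 2) / 0.02)

def compartment(x):
    a_high = factor_a(x) > 0.5
    b_high = factor_b(x) > 0.5
    # each combination of "high"/"low" readings names a compartment
    return {(True, True): "head-middle",
            (True, False): "head",
            (False, True): "middle",
            (False, False): "tail"}[(a_high, b_high)]

for x in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]:
    print(f"position {x:.1f}: {compartment(x)}")
```

Smooth gradients in, sharp boundaries out: that conversion from continuous concentrations to discrete identities is what lets a handful of factors mark out distinct compartments along the body.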

In their book The Plausibility of Life, Marc Kirschner and John Gerhart include developmental compartments as one of the key elements, along with weakly-linked modules and exploratory behavior, of "facilitated variation": the ability of organisms to respond to small genetic changes with large but viable changes in their structure.

Compartmentalization is in some ways similar to exploratory behavior, in which developing body structures such as blood vessels respond to local stimuli such as chemicals emitted by oxygen-starved cells. In both cases, individual cells respond to nearby cues, without needing to refer to some master plan. In the case of compartmentalization, however, both the cues and responses are more general chemical changes, in contrast to the more apparent structural changes seen in exploratory development.

What the two processes have in common is that they allow development that is flexible enough to succeed in diverse situations, for example when nutrients are scarce or the embryo is damaged. This sort of robustness presents a clear evolutionary advantage, since it makes it more likely that a complex organism will grow up and survive to reproduce.

But in addition, robust development lets organisms deal with genetic changes. Although many mutations are fatal, some cause dramatic changes in body organization or other features, while the flexible development process adapts to the new situation. As a result of this facilitated variation, evolution is able to explore a wider variety of strategies and move quickly to new solutions.

For Kirschner and Gerhart, this flexibility is key to understanding the nature and rapidity of evolution. A population can explore the potential advantage of a longer hindlimb, for example, without the need to separately coordinate changes in bone, muscle, blood vessels, nerves, and so forth. Adaptive development takes care of all that.

In fact, the pattern of compartmentalization appears to have been much more stable over the course of evolution than the details of body structure have been. The appearance of compartmentalization and the other features that allow facilitated variation look like the crucial revolutionary events that made rapid evolutionary change possible.

Tuesday, March 9, 2010

Poles Apart

Over the past few months, scientists studying global warming have been rocked by a series of awkward revelations. In November, someone made public more than 1000 emails, many quite damning, from climate researchers at the University of East Anglia (UEA), in what skeptics successfully branded as "Climategate." December and January saw the authoritative Intergovernmental Panel on Climate Change (IPCC) admitting that their published claim that Himalayan glaciers would disappear by 2035 was improperly sourced and wrong, even as critics pounced on other instances in which the panel violated their own procedures for ensuring that science was properly represented in their influential reports. (The IPCC had shared the 2007 Nobel Peace Prize with Al Gore.)

Not surprisingly, different people responded completely differently to these events. Critics see the revelations as confirming that global warming, and especially its human source, is just a hoax. Others counter that the revelations that scientists are fallible human beings do not in the least affect the overwhelming evidence for man-made climate change. Both extremes find confirmation for what they already believed.

They are also both wrong.

First let's examine the hoax theory. Although we don't know who released the emails, they look to have been culled from more than a decade of exchanges and chosen to best incriminate the mainstream climate researchers, both those at UEA and their correspondents. What is striking is that the emails do not reveal any serious evidence that the researchers are fabricating the global temperature rise.

To be sure, the emails show serious misbehavior, including a request from UEA researcher Phil Jones that others delete emails to avoid a freedom-of-information inquiry. Other emails suggest that the researchers hoped to tweak the IPCC process to exclude legitimate, peer-reviewed papers that they didn't regard as credible.

But if the emails showed evidence of a true hoax, the critics poring over them haven't found it--and not for lack of trying. Instead, they have highlighted examples of ambiguous or unfortunate wording, which are conveniently picked up by the likes of Sarah Palin. But although Palin may not know any better, the critics understand that when Jones referred in 1999 to Penn State researcher Michael Mann's "trick" to "hide the decline," he was describing a technique to de-emphasize the inconvenient truth that tree-ring data don't match measured temperatures, which were clearly rising.

This is a serious matter: if the tree-ring data are not good proxies for temperature in cases where we know the temperature, how can we trust them in cases where we don't? But as far as I know, the published papers acknowledge this manipulation, which is an acceptable way to deal with omissions of questionable data. Nothing is being hidden. In fact, this whole issue was addressed by the U.S. National Academy of Sciences in 2006, which confirmed the unprecedented nature of the current warming trend.

The critics know this. Their use of this and similar examples shows that they are less interested in the overall truth than in scoring points and discrediting mainstream climate science.

But the disingenuous actions of the critics do not excuse the behavior of the mainstream scientists.

It is tricky to interpret what the writers surely regarded as private correspondence between colleagues. Nonetheless, the emails seem to show that the East Anglia scientists did not really trust the processes of science, or at least the political decisions based on the science. The researchers in the emails are acutely aware of the political context of their results, and present the data to make their case. For example, UEA tree-ring expert Keith Briffa says "I know there is pressure to present a nice tidy story as regards 'apparent unprecedented warming in a thousand years or more in the proxy data' but in reality the situation is not quite so simple." Briffa, whose data contain the "decline," commendably goes on to argue for a more honest and nuanced description of the data.

It's not unusual for researchers to choose data to tell a particular story. And there really is no evidence that the researchers suspected the story they were telling--of unprecedented, industrially-caused warming--was wrong. But it is clear that the handful of teams around the world who evaluate historical climate trends displayed their data to support a clear end goal. Subsequent revelations show that this mindset also affected the choice of references in the IPCC report, especially with regard to the impacts of climate changes.

One of my correspondents, a biologist who uses complex biological models, likens the climate-science consensus to Lysenkoism--the Soviet-era anti-Darwinist biological agenda. I take this person to mean that dissent from the reigning paradigm is not accepted in mainstream scientific circles: anyone questioning the "consensus" is quickly branded a "denier." This is not the way to do science, especially for something as important as climate change. And it means that the "overwhelming consensus" is not as convincing as it seemed a few months ago.

But it doesn't mean that the consensus is wrong.


 

Monday, March 1, 2010

Survival of the Most Entangled

Most people think quantum mechanics affects only atomic-sized objects, but many experiments have shown that it applies over distances of many miles. In an experiment last year, for example, researchers sent pairs of light particles, or photons, between two of the Canary Islands off the coast of Africa, a distance of 144 kilometers.


This picture dimly shows La Palma, where the photons started, as seen from Tenerife, where they were detected.

Although the Austrian team, led by Anton Zeilinger, only detected one in a million of the pairs they sent, they found that these pairs retained the critical property of entanglement. This means that results of measurements on the two particles are related in ways that can't be explained if each particle responds to the measurement independently: the pair acts like a single quantum-mechanical entity. Such pairs can be used to securely transmit information over long distances.
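
For a feel of what "can't be explained independently" means, here is a small illustration--a sketch, not the experiment's own analysis--of the standard CHSH test. For one standard choice of maximally entangled polarization state, quantum mechanics predicts correlations that push the CHSH combination to about 2.83, while any model in which each photon answers without knowing the far-away analyzer setting cannot exceed 2:

```python
import math

# CHSH illustration (a sketch, not the experiment's analysis). For one
# maximally entangled polarization state, quantum mechanics predicts the
# correlation E(a, b) = -cos(2*(a - b)) between measurements at analyzer
# angles a and b.
def E(a, b):
    return -math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4              # analyzer settings on one side
b1, b2 = math.pi / 8, 3 * math.pi / 8  # analyzer settings on the other side

# Any "independent response" (local hidden variable) model keeps |S| <= 2.
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(f"CHSH value S = {S:.3f} (local limit is 2)")
```

A measured value above 2 after the 144-kilometer trip is one way to certify that the surviving pairs are still entangled.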

My latest story at Physical Review Focus describes a theoretical analysis of this experiment by researchers in Ukraine and Germany. They suggest that the pairs that survive the half-millisecond trip must have had unusually smooth sailing through the turbulent atmosphere, and that this is part of the reason why they are still entangled. (The motion of the air, like the shimmering of a mirage in the desert, generally disrupts the light transmission, but there are short moments of clarity.) This is a pretty comprehensible idea, so in the story I was able to sidestep a lot of interesting issues about how the entanglement was measured and what it means.

For example, the experimenters delayed one photon by about 50 ns by passing it through a fiber before sending it after the other one. That's not a long time, so the atmospheric conditions probably looked pretty similar to the two photons. Since they were subjected to much the same conditions, it doesn't seem so surprising that they would remain entangled. In fact, the original experimenters were pretty pleased that it all worked, but clearly they were hoping it might or they wouldn't have gone to the trouble.

Sending the two photons on the same path certainly isn't the most demanding task, either. More impressive would be sending them on different routes to a final destination where they were compared. But sending an entangled pair is good enough for some quantum communication schemes.

What made the paper particularly interesting was the conclusion that the turbulent atmosphere would be better than, say, an optical fiber that had the same average loss, because the fiber's properties wouldn't change with time. Zeilinger expressed pleasant surprise that the rare moments of exceptional clarity would more than make up for the times when the atmosphere was worse than usual. Still, having no turbulence at all (or a very clear fiber) would be even better.
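
The intuition can be captured with a toy simulation--invented numbers, not the authors' model. Because both photons cross the same link, a genuine coincidence requires both to get through, so its probability goes as the square of the instantaneous transmission, while accidental coincidences from stray light and detector noise arrive at a roughly constant rate. Averaging the square rewards the rare clear moments more than a steady channel with the same average loss:

```python
import random

# Toy Monte Carlo of "fluctuating beats steady" (invented numbers, not the
# authors' model). Genuine pair coincidences scale as T**2, where T is the
# instantaneous transmission; background coincidences are constant.
random.seed(1)

def fluctuating_T(mean):
    # crude stand-in for turbulence: mostly poor transmission, rare clear moments
    return min(mean * random.expovariate(1.0), 1.0)

N = 200_000
mean_T = 0.1
background = 0.002   # assumed constant rate of accidental coincidences

def signal_fraction(samples):
    signal = sum(t * t for t in samples) / len(samples)
    return signal / (signal + background)

turbulent = [fluctuating_T(mean_T) for _ in range(N)]
steady    = [mean_T] * N

print(f"steady link:    {signal_fraction(steady):.1%} of coincidences are genuine pairs")
print(f"turbulent link: {signal_fraction(turbulent):.1%} of coincidences are genuine pairs")
```

Post-selecting on coincidences biases the detected sample toward the clear moments, so a larger share of the counted pairs are genuine--which is roughly why the surviving pairs look better entangled through turbulence than they would through a uniformly lossy channel with the same average transmission.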

Exploiting quantum mechanics in secure long-distance communication, for example via satellites, looks more realistic than ever.