Thursday, February 25, 2010

Targeting Cancer

Amy Harmon of The New York Times had an excellent three-part series this week called "Target Cancer." She follows one clinician/researcher as he pursues a "targeted" treatment for melanoma, which aims at the protein produced by a gene called B-Raf that is mutated more than half of the time in this skin cancer.

The series does a great job of following the emotional roller-coaster ride of the doctor and, of course, his patients. One early targeted drug doesn't work at all, perhaps because it also attacks normal cells, so the side effects become intolerable before the dose is high enough to affect the cancer. A new drug seems not to do anything, but then the team decides to wait for the drug company to reformulate it so that it delivers higher effective doses.

The results are spectacular: the new formulation causes a virtually unheard of remission in the cancer, and raises hopes in formerly hopeless patients and in the doctors. The excitement and the potential are palpable as some patients dare to hope and others can't bear to. But within a few months, the patients are dying again.

The new drug is an example of personalized medicine, since it is effective only for patients with a particular mutation. There are a few other examples of therapy tuned to a patient's genetic profile, such as the breast-cancer drug Herceptin (trastuzumab) and the dosing of the anticoagulant warfarin (Coumadin).

But this treatment is actually aimed at cancers with a particular mutation--a mutation that the patient's normal cells don't have. Cancer cells generally accumulate more and more mutations as the disease progresses, because the disease disrupts the cell's normal quality-control mechanisms. A study announced last week (registration required) showed that the specific pattern of mutations could be used to monitor a cancer's ebb and flow during treatment, although the approach doesn't yet look practical for tailoring treatment.

Unfortunately, as described in this series, even when a drug targets a mutation in a particular patient's cancer, cancers often develop alternate routes to proliferation. Harmon alludes to one approach to this problem: a multi-pronged "cocktail" that attacks many possible mutations at once. Such cocktails are standard, for example, in treating HIV/AIDS.

Without vilifying the drug companies, she explains some of the challenges these profit-oriented companies face in pursuing this approach. In particular, even if a cocktail might ultimately be more effective, getting it approved could delay or threaten the profits from the drug a company already has in hand, even if that drug extends life by only a few months. This is especially true if other drugs in the cocktail are owned by competing companies. In any case, the difficulties of testing multiple drugs at once make it much harder to know what is effective and what side effects may appear.

The idea of analyzing molecular networks and attacking them at many points simultaneously is a recurring theme in systems biology. But sometimes it seems very far in the future.

Monday, February 22, 2010

Stoner Magnetism

My latest story at Physical Review Focus describes experimental evidence that a missing atom in a chicken-wire-like sheet of carbon can hold a single extra electron.

Theorists have long expected this to be the case, and that unpaired electrons on such vacancies might join up to make an entire single-atom-thick graphene sheet magnetic at relatively high temperatures. Many researchers are excited about the rapid and unusual motion of electrons in these sheets, and IBM researchers recently described a graphene field-effect transistor, grown on silicon carbide, with a cutoff frequency (fT) above 100 GHz. If the layers are also magnetic at normal temperatures, this material could be fun and potentially practical for spintronics, which manipulates both the charge and magnetic properties of electrons.

The actual experiment didn't directly show magnetism, though, just a state that looked like it should hold only one electron. The researchers used scanning-tunneling microscopy to look at a clean, cold graphite surface, which includes many stacked graphene-like layers. In fact, the authors suggest that magnetism may exist in graphite, but not in graphene, because in the latter the effects of two equivalent carbon positions for a vacancy may cancel each other out.

It turns out to be a little bit tricky to explain the connection between local spins, which naturally carry a magnetic moment, and magnetism in a bulk material.

The usual story is straightforward: some types of atoms (or vacancies) naturally have a magnetic moment, "like a tiny bar magnet." Nearby moments exert forces that tend to align their neighbors, either the same way or oppositely. If it's the same, then the moments on many different atoms can all line up to form a net magnetization in a large sample, if the temperature is not so high that they get jostled out of position.

This description is correct--but only for some magnets.

For other magnets, it's just not accurate to say that the atoms each have magnetic moments that line up with each other. In these so-called "itinerant" magnets, the magnetization comes from the metallic electrons washing over all of the atoms. In this case, preference for one direction or another at a particular atom develops only as a part of the magnetization of the whole sample.

Mathematically, itinerant magnetism takes the form of an instability, in which the energy benefit of aligning the moments of the electrons overcomes the energy cost of doing so. A simple description was developed back in the late 1930s by Edmund Stoner at the University of Leeds, and his name is still used to convey the ideas. (I apologize to anyone who expected this post to be about the natural charisma of pot-smokers.)
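
A schematic version of the usual energy balance, as it appears in textbooks (my summary, not anything specific to the papers discussed here): shifting a small number $\delta n$ of electrons from one spin direction to the other costs kinetic energy of roughly $(\delta n)^2 / N(E_F)$, where $N(E_F)$ is the density of electronic states at the Fermi level, while the exchange interaction $U$ lowers the energy by roughly $U (\delta n)^2$. The balance tips toward spontaneous magnetization when

$$ U\,N(E_F) \gtrsim 1, $$

which is the Stoner criterion (the exact numerical factor depends on whether $N(E_F)$ counts one spin direction or both).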

Of course, the distinction between "local-moment" and "itinerant" magnetism is often somewhat fuzzy, and for the purpose of explanation to the general public it may not seem that important. But to people who understand the issues, getting it wrong is unforgivable, as I found out to my chagrin after using the above simple local picture in my Focus story on the 2007 Physics Nobel for Giant Magnetoresistance (GMR).

GMR read heads in disc drives can be seen as a simple type of spintronics device. In more sophisticated devices that people dream about, electrons will carry their magnetization to new locations, so it's important to be clear on the nature of that magnetism.

Wednesday, February 17, 2010

Trailblazing

What a piece of work is a man! For that matter, what an awesomely complex apparatus is any large organism, from a dog to a dogwood!

But as we learn more about biology, our awe shifts from the intricate cellular arrangements in mature multicellular life to the ways these structures arise during development from simpler (but not simple) rules. Even if we can accept simple examples of self-organization, like the spontaneous arrangement of wind-blown sand into regular dunes, the self-assembly of living creatures seems to be a different scale of miracle. But researchers have repeatedly found that simple rules, in which cells respond to local cues like chemical concentrations and mechanical stresses, suffice to describe how various aspects of our complex bodies develop.

In The Plausibility of Life, Marc Kirschner and John Gerhart describe this rule-based strategy, which they call "exploratory behavior," as a very effective way for organisms to develop dependably in the face of unpredictable changes in their environment. But they go further, stressing that flexible, adaptive development speeds evolution by letting small genetic changes give rise to vastly different--but still viable--organisms. Exploratory behavior is thus a critical component of their concept of "facilitated variation."

As an illustration of rule-based organization, Kirschner and Gerhart review the foraging of ants. Steven Johnson described this and other examples in his thought-provoking 2002 book, Emergence. Simply by following local rules and responding to the scent trails left behind by their predecessors, individual ants join to form major thoroughfares between a food source and their nest. No master planner guides their motions.
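
To make the local-rules idea concrete, here is a toy simulation of my own devising (not taken from either book) of the classic two-path setup: each simulated ant chooses a path in proportion to the scent already on it, shorter trips get reinforced more strongly, and the scent slowly evaporates. No ant ever compares the two paths, yet nearly all of the scent, and hence the traffic, ends up on the shorter one.

```python
import random

# Toy "two bridges" ant model: two paths connect nest and food.
# Each ant picks a path in proportion to its pheromone level; every
# trip deposits pheromone in inverse proportion to the path's length
# (a stand-in for faster round trips), and pheromone evaporates.

LENGTHS = {"short": 1.0, "long": 2.0}    # arbitrary path lengths
pheromone = {"short": 1.0, "long": 1.0}  # start with no preference
EVAPORATION = 0.02                       # fraction of scent lost per round
N_ANTS = 50

random.seed(0)
for _ in range(200):
    for path in pheromone:
        pheromone[path] *= 1.0 - EVAPORATION
    for _ in range(N_ANTS):
        # Local rule: follow the scent, with no global knowledge.
        total = sum(pheromone.values())
        path = "short" if random.uniform(0, total) < pheromone["short"] else "long"
        pheromone[path] += 1.0 / LENGTHS[path]

share = pheromone["short"] / sum(pheromone.values())
print(f"Fraction of scent on the short path: {share:.2f}")  # close to 1
```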

Similarly, in a developing animal, some cells may find themselves far from the nearest blood vessel. In response to the lack of oxygen, they secrete chemicals that encourage the growth of new capillaries nearby. And in the brain, the intricate wiring of nerve cells is guided in part by signals that they receive and transmit during certain periods of development.

It really has to be this way. Although it's true--and amazing--that the 959 somatic cells of the adult roundworm C. elegans take up pre-ordained positions in the final creature, the cells in much bigger creatures like us simply can't all have designated roles in the final organism. For one thing, there's just not enough information in our 20,000 or so genes to tell every cell where to go on some genetic master plan. Instead, each cell has to have a degree of autonomy in dealing with new situations. For example, if one of your legs is stunted early on, the muscles, nerves, blood vessels, and skin will all adapt to its new size, rather than blindly proceeding with some idealized plan. Even in C. elegans, the fixed cellular arrangement mostly results from such adaptive behavior of individual cells.
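
The arithmetic behind "not enough information" is rough but stark (round numbers of my own, not from the book): the genome's roughly $3 \times 10^9$ base pairs carry at most about

$$ 2\ \tfrac{\text{bits}}{\text{base pair}} \times 3\times 10^{9}\ \text{base pairs} \approx 6\times 10^{9}\ \text{bits}, $$

while a human body contains on the order of $10^{13}$ cells or more. Even a single yes-or-no instruction addressed to each cell individually would overrun the genome's entire capacity a thousand times over.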

If you're still not convinced, think of the offspring of a bulldog and a Great Dane, which will have a facial and body structure unlike either of its parents. Yet we are not even surprised that its blood vessels and muscles will successfully adapt themselves to this completely novel shape.

It makes perfect sense that creatures that use this adaptive process in their development would be more successful during evolution.

But the reverse is also true: this flexibility makes evolutionary innovations much easier. The repurposing of mammalian digits for a dolphin's flipper, a horse's hoof, or a bat's wing is much faster if only a few genes have to change to determine the new shape, and the others adapt in parallel. In concert with modular organization, development that is built on exploratory principles is critical to letting evolution explore radically new architectures in response to small genetic changes.

Wednesday, February 10, 2010

Brain-Machine Interfaces

I have a short news story about exchanging information between machines and people's brains, now online at the Communications of the Association for Computing Machinery. This is a difficult field to capture in a few hundred words. There's a lot of progress, but people are trying a lot of different approaches, and they're not all addressing the same problem.

For example, some people are hoping to provide much needed help to people with disabilities, while others see the opportunity for new user interfaces for games.

Naturally, people will be willing to spend a lot more for the rehabilitation. In addition, recreational use pretty much rules out (I hope!) any approach that requires surgically implanting something in the skull. Even the researchers who are exploring rehabilitation don't yet feel confident exposing people to the risk, because they can't be sure of any benefits. As a result, these studies mostly involve patients who have received implants for other reasons.

If surgery is ruled out, there are fairly few ways to get at what's going on in the brain. With huge, expensive machines, you can do functional MRI, but that doesn't look particularly practical. Both Honda and Hitachi are using infrared monitoring of blood flow, with impressive results. But the best established measurement is EEG, which measures electrical signals with electrodes pasted to the surface of the head.

One up-and-coming technique that I mention in the story is called ECoG, or electrocorticography. Like EEG, it measures the "field potentials" that result from the combined actions of many neurons. However, the electrodes are in an array that is draped over the surface of the brain (yes, under the skull), so the signal is much cleaner.

Finally, there are approaches like BrainGate that put an array of a few dozen electrodes right into the cortex, where they can monitor the spikes from individual neurons. 60 Minutes did a story a while ago that showed people using this technology to move a computer mouse.

If the implants are to be practical, they will need to be powered and interrogated remotely, not through a bundle of wires snaking through the skull. Many people are exploring wireless interfaces for this purpose, as described by my NYU classmate Prachi Patel in IEEE Spectrum.

Brain-machine interfaces can also run in either direction. My story dealt mostly with trying to tap the output of the brain, for example letting paralyzed people control a wheelchair or computer mouse. But work on input devices, such as artificial cochleas or retinas, is also proceeding quite rapidly. To my surprise, Rahul Sarpeshkar, who works on both directions, told me the issues are not that different.

My guess would have been that input to the brain can take advantage of the brain's natural plasticity, which will adapt it to a crude incoming signal. In contrast, to usefully interpret the output of haphazardly placed electrodes, people need to do an awful lot of sophisticated processing of the signal, which can slow things down.
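
To give a flavor of that processing, here is a deliberately minimal sketch--my own illustration with synthetic numbers, not BrainGate's or anyone else's actual algorithm--that treats decoding as regression: learn a linear map from binned firing rates on many electrodes to the intended two-dimensional cursor velocity, then apply it to new activity.

```python
import numpy as np

# Minimal linear neural decoder (illustrative only): map binned firing
# rates from ~100 electrodes to a 2-D cursor velocity.

rng = np.random.default_rng(0)
n_channels, n_samples = 96, 2000

# Synthetic "training data": firing rates plus the velocities they
# noisily encode. Real data would come from the implanted array.
true_weights = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

# Fit the decoder by regularized least squares (ridge regression).
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels),
                    rates.T @ velocity)

# Decode a fresh chunk of activity into a velocity command.
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
print("decoded velocity:", new_rates @ W)
```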

The toughest thing about this sort of story, though, is time. There's a lot of progress, but there's a long way to go. Once the proof of principle is in hand, there's still a lot of hard work to do, some of which may involve major decisions about basic aspects of the system. It's hard to communicate the progress that's being made without getting into a lot of details that are only interesting to specialists.

Even when it lets the blind see or the lame walk, writing about engineering is a hard sell for the general public.

Monday, February 8, 2010

Blinded Science

Medical researchers systematically shield their results from their own biases. Other scientists, not so much.

Medical science includes many types of studies, but the universally accepted "gold standard" is the randomized, controlled, double-blind, prospective clinical trial. In this kind of study, patients are randomly assigned to receive either the treatment being tested or an ineffective substitute, and are then watched for their response according to predefined criteria.

"Blind" means that the patients don't know whether they are getting the "real thing" or not. This is important because some will respond pretty well even to a sugar pill, and a truly effective drug must at least do better than that. This feature may not be so important in other fields, assuming that the rats or superconductors they study don't respond to this "placebo effect."

But the "double" part of "double blind" is also critical: in a proper trial, the doctors, nurses, and the researchers themselves don't know which patients are getting the real treatment until after the results are in. Without this provision, experience shows, they might treat the subjects differently, or evaluate their responses differently, and thus skew the conclusions. The researchers have a lot invested in the outcome, they have expectations, and they are human.

So are other scientists, I'm afraid.

It is surprising that most fields don't expect similar protection against self-deception. Sure, it's not always easy. In my Ph.D. work, for example, I made the samples, measured them, and interpreted the results. Being at the center of all aspects of the research helped keep me engaged and excited in spite of the long hours and modest pay, and was good training. But I also felt the pressure to get more compelling results.

These days, many fields already involve multidisciplinary collaborations that leverage the distinct skills of different specialists. Would it be so hard for the person who prepares a sample to withhold its detailed provenance from the person doing the measurement until the measurement is finished? With some effort, people could even hide information from themselves, for example by randomly labeling samples. No doubt it takes some of the immediate gratification out of the measuring process, but the results would be more trustworthy.
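
For concreteness, here is one hypothetical way the bookkeeping could work, in the same spirit as the coded assignments of a clinical trial (the sample names and file name below are invented for illustration):

```python
import csv
import random

# Hypothetical self-blinding helper: give each sample an opaque code,
# and store the code-to-provenance key in a file that nobody opens
# until all the measurements have been recorded.

samples = ["annealed_600C", "annealed_800C", "as_grown", "control"]

random.seed()  # no fixed seed: the whole point is not to know
codes = [f"S{i:03d}" for i in range(1, len(samples) + 1)]
random.shuffle(codes)

with open("blinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "provenance"])
    for code, sample in zip(codes, samples):
        writer.writerow([code, sample])

# Hand the experimenter only the codes; measurements get recorded
# against these labels, and the key file stays sealed until the end.
print("Samples to measure:", sorted(codes))
```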

As recent events have made clear, trust is especially important in climate science. In October, Seth Borenstein, a journalist with the Associated Press, gave a series of measurements to four statisticians without telling them what the data represented. Each of the experts found the same thing: a long-term upward trend, with no convincing sign of the supposed cooling of the last ten years. Only afterward were the data revealed to be global temperature measurements.

But why did it take a journalist to ask the question this way? Shouldn't this sort of self-doubt be built into all science at a structural level, not just assumed as an ethical obligation?

Experimental science will always be a human endeavor, I hope, so the results can never be completely divorced from expectations. But there are ways of making it more trustworthy.

Thursday, February 4, 2010

Fractal Biology

I first learned about the intermediate-dimension objects called fractals in the late 1970s, from Martin Gardner's wonderful "Mathematical Games" column in Scientific American. One of the cool and compelling things they can do is explain how the highly branched circulatory system, with an effective dimension between two and three, can have an effectively infinite surface area, abutting every cell in the body, while taking up only a fraction of the body's volume.

Twenty years later, Geoffrey West and his collaborators used this fractal model to explain the well-known "3/4" law of metabolism, in which organisms' resting metabolic rate varies as the 3/4 power of their body mass. West, an erstwhile theoretical physicist from Los Alamos who recently stepped down as the head of the delightfully eclectic Santa Fe Institute (for which I've done some writing), has applied similar scaling analyses to other aspects of biology, as well as to resource usage in cities.

Unfortunately, according to Peter Dodds at the University of Vermont, the well-known 3/4 law is also wrong. In my latest story for Physical Review Focus, I briefly describe how Dodds uses a model of the branched network to derive an exponent of 2/3.

Interestingly, this 2/3 exponent is precisely what you'd expect from a simple computation of the surface-to-volume ratio of any simple object. A 2/3 law for metabolism was first proposed in the mid-1800s, Dodds said, at "a tobacco factory in France, trying to figure out how much to feed their workers, based on their size. They asked some scientists and they said 'we think this 2/3 rule would make sense.'" Experimental data on dogs seemed to fit this idea.
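
The back-of-the-envelope version of the surface-to-volume argument: if an animal of mass $M$ is scaled up uniformly, its mass tracks its volume while heat loss (and hence, in this argument, resting metabolism) tracks its surface area, so

$$ M \propto V \propto L^{3}, \qquad S \propto L^{2} \propto V^{2/3} \propto M^{2/3}, $$

giving a metabolic rate that varies as the 2/3 power of body mass.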

But later experiments hinted at a slightly higher exponent. "At some point it became more concretely 3/4," Dodds said, based on the work of Max Kleiber published in 1932. "He'd measured some things that looked like 0.75 to him. You know, he had nine or ten organisms, and it was easier on a slide rule." At a conference in the 1960s, scientists even voted to make 3/4 the official exponent.

But in the wake of the fractal ideas, Dodds and his collaborators re-examined the data in 2001. "What really amazed me was I went back and looked at the original data and it's not what people thought. People had sort of forgotten about it by that point." Instead, Dodds found, the data really matched 2/3 better. At the very least, the 3/4 law was not definitive. This doesn't mean that the fractal description is not useful, only that it has a different connection to the metabolic rate.

From C.R. White and R.S. Seymour, Allometric scaling of mammalian metabolism, Journal of Experimental Biology 208, 1611-1619 (2005). BMR is basal (resting) metabolic rate. The best-fit line has a slope (exponent) of 0.686±0.014 (95% CI), much more consistent with 2/3 than with 3/4.

Other authors have since supported this conclusion, especially when they omit animals whose true resting metabolism is hard to measure, such as kangaroos, rabbits, and shrews. The real biological data is messy, and perhaps it is silly to expect a simple mathematical law to apply to such diverse biological systems. In any case, the difference between the two exponents is modest, amounting to a factor of about 2.5 in metabolism over the range of experimental data in the plot.
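
To see where a factor like that comes from: the two laws differ only through the exponent gap $3/4 - 2/3 = 1/12$, so if the body masses in a data set span roughly five orders of magnitude (my illustrative assumption; the exact range depends on the data set), the predictions diverge by about

$$ \left(10^{5}\right)^{1/12} = 10^{5/12} \approx 2.6. $$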

Still, some experts, such as the commentators I interviewed for the Focus story, think that the 3/4 law is correct. But it seems plausible that many decades of experimental observations have been colored by researchers' expectations. Science remains a human endeavor.

Wednesday, February 3, 2010

Beautiful Data

There's an old joke where a scientist presents "representative data," when everyone knows it's really their best data. Like many jokes, there's a large measure of truth in it. (And like many science jokes, it's not actually funny unless you're a scientist.)

Audiences, whether in person or in print, like a good story. And it's best when data tell a story on their own, without further words of explanation. Scientists can handle tables of numbers better than most people, but, even for them, pictures tell the most compelling stories.

In fact, many experienced researchers begin preparing a new manuscript by deciding what figures to include. In part this is because figures take a lot of space in journals, but in addition many readers will go from the title straight to the figures, bypassing even the short abstract. If the pictures don't tell a good story, readers may just move on.

This is what it means when scientists say data are "beautiful": not that they have some intrinsic aesthetic appeal, but that they tell a good story about what's happening. Ideally, the story is compelling because the experiments have been done very well. But that's not always the reason.

Some of the most beautiful data I ever saw, in this sense, were presented at Bell Labs by Hendrik Schön in early 2002, at a seminar honoring Bob Willett and the other winners of that year's Buckley Prize. In contrast to Willett's painstaking work over many years elucidating the properties of the even-denominator fractional quantum Hall effect, Schön presented one slide after another demonstrating a wide variety of phenomena in high-mobility organic semiconductors.

The problem was that the story the beautiful data were telling was a lie. Schön's rise and fall were expertly described in Eugenie Reich's 2009 book Plastic Fantastic, and I was later on the committee that concluded that he had committed scientific misconduct.

But honest scientists also must be careful when they choose data that supports the story they want to tell. There is an intrinsic conflict of interest between telling the most compelling story and facing honestly what the data are saying. Decisions about which data to omit, and how to process the remainder, must be handled with great care.

As a rule, scientists overestimate their objectivity in selecting and processing data. As individuals, they are more swayed by expectations than they would like to admit. Collaborators can help keep each other honest, but only to a degree.

One thing that keeps science on track in these situations is that other people may make the same sort of measurements. Often different experimenters have a different idea of what's right, and the back-and-forth helps the field as a whole converge toward the truth.

But what happens when a whole field expects the same thing? There's a real danger that the usual checks and balances of scientific competition will break down, and all the slop and subjectivity of experiments will be enlisted in service of the common expectation.

Just because data is beautiful--that is, tells a good story--doesn't mean that story is right.

Tuesday, February 2, 2010

Tunable Flexibility

Some genes can't be substantially changed without fatal consequences, while tweaking other genes lets organisms gracefully adapt to new situations.

This modular organization, in which some groups of components maintain a fixed relationship to one another even as the relationship between groups changes, is common in biology. In their book, The Plausibility of Life, Marc Kirschner and John Gerhart argue that the weak linkages between unchanging modules are a critical ingredient of "facilitated variation," which in turn makes rapid, dramatic evolution possible. But this long-term adaptability may be, in part, a fortunate side effect of a system that lets individual organisms respond to changes during their lifetime.

Natural selection is not based on compassion. Evolution could proceed perfectly well even if both essential genes--those within modules--and nonessential genes--including those forming the weak linkages between modules--mutated equally readily. A mutation within a module might kill its host, but that's the way of progress. Mutations of the linkages could survive and, by altering the relationships between modules, allow a population to explore new innovations.

Evolution would be more efficient if genes within modules didn't mutate as quickly as those between them. But it's not required.

But what if the evolvability is just one facet of a more general flexibility? At a December 2009 meeting in Cambridge, MA, which I'm covering for the New York Academy of Sciences, Naama Barkai of the Weizmann Institute in Israel showed evidence that genes that evolve rapidly also show greater variability in expression.

To measure the rate of evolution, Barkai's postdoc Itay Tirosh compared different yeast species to see which genes had the most differences. Some genes differed a lot, while others were quite similar.

Tirosh also looked at two measures of the intrinsic variability of gene expression (measured by messenger RNA levels): the degree of change in response to changed conditions, like stress, and the time-dependent variation in expression, or noise. Again, some genes varied a lot, while others stayed quite steady. Moreover, the variable genes were likely to be the same ones that evolved rapidly.
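
The style of comparison is easy to sketch, though the stand-in below uses synthetic numbers rather than the group's actual data: for each gene, line up a measure of between-species sequence divergence with a measure of expression variability, and test whether the two rankings agree.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative stand-in for the gene-by-gene comparison described above:
# does a gene's divergence between species track how variable its
# expression is? (Synthetic numbers; real input would be per-gene
# divergence estimates and expression measurements.)

rng = np.random.default_rng(1)
n_genes = 5000

# Fake per-gene "flexibility": genes with more of it both diverge
# faster and show noisier expression.
flexibility = rng.gamma(shape=2.0, scale=1.0, size=n_genes)
divergence = flexibility + rng.normal(scale=0.7, size=n_genes)
expression_noise = flexibility + rng.normal(scale=0.7, size=n_genes)

# Rank correlation is robust to the unknown scales of the two measures.
rho, p_value = spearmanr(divergence, expression_noise)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.1e})")
```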

These three correlated measures of gene flexibility were connected with differences in the structure of the promoter, which is the region of DNA near where its transcription into RNA begins. Flexible genes tended to include the well-known "TATA" sequence of alternating thymine and adenine bases, as well as different arrangements of the nucleosomes.

A complete understanding of the role of flexibility in both short- and long-term variation will require a lot more research. But these results support the notion that arrangements that let organisms adapt to the slings and arrows of everyday life also give them the tools to rapidly evolve dramatically new ways of life.