Friday, October 30, 2009

Conductance is Transmission

My latest story in Physical Review Focus describes measurements of electrical conduction between two "buckyballs," or C60 molecules. This kind of characterization is a prerequisite for the understanding and control that would be needed for future "molecular electronics."

The electrical conductance (the inverse of the resistance) in such tiny systems is limited to values of the order of 2e²/h, where e is the electron charge and h is Planck's constant, which sets the scale for quantum phenomena. This combination goes by the name of "conductance quantum," or G₀.

Unlike other quanta such as photons, however, the conductance is not generally required to come in discrete packets. Under special experimental circumstances, such as in "quantum point contacts," the conductance can take on reasonably stable values that are simple multiples of G₀.

Still, the idea that conductance has a special value was quite jarring when it became popular in the 1980s. Most materials have a well-defined conductivity, determined by the number of electrons and by how frequently they scatter from imperfections or atomic motion. The conductance, which is just the total current divided by the voltage, is then calculated from the conductivity by multiplying by the cross-sectional area of a piece of material and dividing by its length.

In very small devices, however, electrons move as a wave from one end to the other. The conductance is then determined by the likelihood that they propagate to the far end. The visionary IBM researcher Rolf Landauer laid the groundwork for this view in a 1957 article in the IBM Journal of Research and Development.

Only a quarter-century later, in the 1980s, did experiments start to catch up. Researchers had been doing experiments at very low temperatures in clean semiconductor systems, where electrons propagate as coherent waves over many microns. Lithographic patterning can easily create structures that are smaller than this distance, and comparable to the wavelength of the electrons themselves (typically a few hundred angstroms, or a few hundredths of a micron).

In the semiconductor samples, electrons are free to flow only in a thin sheet near the surface. Researchers can apply a voltage to a metal film on top of the semiconductor so that the electrons have to avoid the region under the metal. If there is a small gap in a line of metal, electrons can squeeze through the gap, which forms a quantum point contact. This is the situation where the quantum effects become important.

The usual derivation goes like this (feel free to skip over this long paragraph): on the two sides, electrons fill up the available states equally, so filled states on one side face filled states on the other and have no way to move across. Applying a voltage V raises the energy of electrons on one side, so the top ones now face empty states on the other. The number of such exposed states is the energy change, eV, times the number of states in each energy interval. Here's the magic: the number of states per energy interval, for the special case of one-dimensional waves, is proportional to how the wave vector k varies with energy E: dk/dE. Their group velocity--the rate at which they impinge on the contact--is proportional to dE/dk, with the constants combining to give a factor of 1/h. Each electron carries a charge of e, and there are two electrons in each state because there are two spin states. Presto: the total current is eV(dk/dE)(1/h)(dE/dk)2e = 2e²/h × V.
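
For readers who prefer to see the bookkeeping laid out, here is the same cancellation written compactly for one spin-degenerate channel (a sketch of the standard textbook counting, with the factors of 2π absorbed into h; nothing here is specific to the buckyball experiment):

```latex
% Landauer counting argument for one spin-degenerate 1D channel (sketch).
I = \underbrace{2e}_{\text{charge, two spins}}
    \times \underbrace{\frac{1}{2\pi}\frac{dk}{dE}\,eV}_{\text{exposed states per unit length}}
    \times \underbrace{\frac{1}{\hbar}\frac{dE}{dk}}_{\text{group velocity}}
  = \frac{2e^2}{h}\,V ,
\qquad
G \equiv \frac{I}{V} = \frac{2e^2}{h} = G_0 .
```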

To me it's rather unsatisfying to go through these shenanigans to get a simple answer like 2e²/h. It doesn't seem right that we have to introduce all these extra quantities just to have them cancel out. Is there an easier way to get to this answer?

In any case, it is now clearly established that each quantum channel has an overall conductance of G₀=2e²/h, multiplied by the transmission coefficient, which is the probability of a particular wavelike electron making it to the other side. This result applies to any quantum transmission, whether it's in an engineered semiconductor or a single C60 molecule.
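
To make that concrete, here's a little numerical sketch of my own (the transmission values are invented, not taken from the experiment): the Landauer formula just multiplies G₀ by the sum of the transmission coefficients of whatever channels are available.

```python
# Sketch: Landauer conductance, G = G0 times the sum of transmission coefficients.
# The transmission values below are invented for illustration.
E_CHARGE = 1.602176634e-19   # electron charge, coulombs
PLANCK_H = 6.62607015e-34    # Planck's constant, joule-seconds

G0 = 2 * E_CHARGE**2 / PLANCK_H   # conductance quantum, about 7.75e-5 siemens

def landauer_conductance(transmissions):
    """Total conductance of independent quantum channels."""
    return G0 * sum(transmissions)

# A clean quantum point contact with two fully open channels:
print(landauer_conductance([1.0, 1.0]) / G0)   # -> 2.0, i.e. 2*G0

# A weakly transmitting single-molecule junction:
print(landauer_conductance([0.01]))            # roughly 7.7e-7 siemens
```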

Thursday, October 29, 2009

Junk or No Junk?

The phrase "junk DNA" is a hot button. Authors of press releases, news stories, and even some journal articles seem powerless to resist casting any new discovery of function in non-protein-coding DNA as overthrowing a cherished belief that most of the DNA is junk.

In contrast, bloggers including T. Ryan Gregory and Larry Moran regularly gripe that this framing, like many "people used to think x, but now…" stories, is misleading: biologists have known for decades that non-coding DNA contains important regulatory and other functional sequences. Nobody seriously thought it was all junk: that's just a myth that makes the story seem more exciting.

Still, most biologists agree that DNA is mostly junk.

John Mattick is not so sure.

In a 2007 paper in Genome Research, Mattick and his co-author Michael Pheasant, both of the University of Queensland in Australia, suggested that evolution could be sheltering much more than the 5% of the genome estimated by the ENCODE project and others. Those researchers estimate the background rate of neutral evolution by looking at sequences that they assume to be useless, such as "ancient repeats" left behind from long-ago genomic invasions.

Instead, if these sequences are a little bit useful, perhaps because they are occasionally drafted by the cell for other uses, then they would be slightly preserved during evolution. If this is the case, then other sequences that are also slightly preserved may be useful, too.

I discussed this issue with Mattick for my story for Science (subscribers only), but it was hard enough to capture the issues for strongly selected sequences, so this subject didn't make the cut. Mattick didn't claim that the issue is settled, only that the logic was in danger of being circular: "It is basically an open question. We have no good idea how much of the genome is conserved, except for that which is dependent on questionable assumptions about the nonconservation of reference sequence."

For him, the extensive transcription of the genome seen by ENCODE may not be a sign that RNA production is unselective, but that a large fraction of the DNA is serving some useful, if so far unknown, function.

Mattick described himself as a "minor author" among the scores on the ENCODE project. Ewan Birney, of the European Bioinformatics Institute in the U.K., played a coordinating role. But he doesn't strongly dispute Mattick's observations. "Ancient repeats provide a marker of evolution. They may very well be under some selection," Birney said. But "the striking thing," he stressed, "regardless about where the line is between selection and no selection, is that a lot of the functional regions are at the absolute lowest end of what we see across the human genome." They may be important, but not very important.

Or maybe the biochemical assays don't measure biological importance at all. "Many people instinctively feel," Birney said, "that all the functional elements really must be selected for in some sense. But there's an alternative view, which is that there's just a big set of cases which are generated randomly, are perfectly assayable, when you assay them they're always there in that species, but in fact evolution doesn't select either for or against them. They are truly neutral elements: they are selected neither for nor against."

More philosophically, Birney doesn't find the evolutionary question to be central to short-term concerns about human health. "For disease biology we're interested in understanding the disease. We're not so interested in proving whether they're weakly under selection or something like that."

In fact, for both Birney and Mattick, the extensive biochemical activity of the weakly selected 95% of the DNA suggests its potential as a reservoir of "spare parts." Whether or not the long-term potential of that reservoir puts evolutionary pressure on its components, so that they are conserved, may not be the key issue. The important thing is that those partially-assembled genetic tools are ready to be called into action for future innovations.

Wednesday, October 28, 2009

ENCODE

The mapping of the human genome in draft form in 2000 was a turning point in biology. But the announcement really marked a start, rather than an end, of the practical use of genomic information.

Several large-scale projects in the intervening years have mined particular aspects of the genome and combined them with other sources of information. The HapMap project, for example, looked at how common single-nucleotide polymorphisms, or SNPs--changes in a single base--varied among a few selected populations. These studies formed the basis for genome-wide association studies to identify DNA regions associated with various diseases.

Another project, called ENCODE, for ENCyclopedia Of DNA Elements, focused on cataloguing the various types of protein-coding and regulatory structures in the genome. By correlating these with biochemical measures of functional activity and the way the DNA is organized in the nucleus, the researchers got a broad view of how the expression of genes is regulated. They also compared the DNA sequences with those of closely and distantly related organisms, to illuminate how the function of the DNA is related to its evolution.

With some 200 co-authors from 80 different institutions, the ENCODE project rivals some big particle-physics experiments for scope and complexity. In fact, the pilot phase of ENCODE selected "only" 1% of the genome--around 30 million bases--for detailed study. An overview of the results appeared in Nature in June 2007. The data are publicly available, and researchers continue to publish papers on aspects of the work. In addition, follow-up work is aimed at analyzing the entire genome.

Among the profound conclusions from the pilot phase is that most of the genome is transcribed into RNA, even though only 1.5% or so codes for protein and only about 5% seems clearly functional. In other words, much of the regulation of genetic activity may be occurring, not at the level of transcription, but at the level of RNA.

The researchers also found that the organization of the chromosomes in the nucleus, in particular the wrapping of the DNA around histones to form nucleosomes, predicts the locations where transcription begins. These results emphasize the known importance of the positions of nucleosomes in regulating genetic activity at different positions.

Some of the researchers looked at various measures of biochemical activity along the DNA, such as binding to proteins that are known to be active in regulation. Their hope was that these assays would serve to identify regions with a biological function in the cell.

Other ENCODE researchers compared the sequences with corresponding sequences from other organisms--both close relatives like mice and distant eukaryotic relatives like yeast. According to a longstanding assumption, the degree of similarity of these sequences, showing how resistant they are to changes from neutral evolution, should also reflect their biological importance.

These studies revealed two surprises. First, not all biochemically active sequences are evolutionarily constrained. This might mean that the biochemical tests don't measure things that are important to the cell after all. Second, and more puzzling, not all of the constrained sequences had any obvious function.

I wrote a story for Science this summer (subscribers only, sorry) discussing possible reasons why evolution and importance don't always track one another.

ENCODE and other large-scale studies will continue to supply us with extensive, detailed information about the genome. The story is only just beginning.

Tuesday, October 27, 2009

Climate Cover-Up

In their new book, Climate Cover-Up: The Crusade to Deny Global Warming, James Hoggan and Richard Littlemore waste little time wringing their hands about the reality or seriousness of the global warming threat. They dispense with this question quickly, showing that the essential features of carbon-dioxide-induced warming have been known for over a century.

Although a devil might lurk in the details, the recent state of the science is captured by Naomi Oreskes' 2004 literature study in Science, which found zero dissenters from the consensus among 928 journal articles referencing "global climate change." Similarly, the 2007 Fourth Assessment Report of the Intergovernmental Panel on Climate Change, whose political charter leads it to avoid poorly understood possibilities like collapsing ice sheets, nonetheless states that "most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic [greenhouse gas] concentrations."

In contrast, a Pew Survey released last week concludes that only 36% of Americans think there is solid evidence that the Earth is warming because of human activity, down from an already low 47% over the past few years. Climate Cover-Up explores how it has come to pass that the public still thinks that this is an open scientific question. Hoggan and Littlemore describe the extensive, organized efforts to make it appear open, largely funded by corporations with much to lose from effective climate actions.

Hoggan is a public-relations professional who says that PR people have a duty to serve the public good. In 2005 he founded DeSmogBlog to highlight just the sorts of systematic distortions that the book catalogs, and Littlemore is editor at the site. Their highly readable book describes these efforts, and the funding behind them, with journalistic precision and documentation.

Their laundry list of deception includes "astroturf" groups that use sanitized corporate funds to present a "grass roots" appearance; "think tanks" that increasingly eschew analysis for promotion of policies that favor their sponsors; and petitions signed by scientists who have little or no expertise in climate. Hoggan and Littlemore systematically discuss these and other programs to frame the "debate" as one in which huge uncertainties remain--as if that should be a source of comfort.

Sowing doubt is a tried-and-true strategy for delaying government response. David Michaels' excellent 2008 book, Doubt Is Their Product, for example, and Devra Davis' 2007 The Secret History of the War on Cancer related how the tobacco industry perfected this technique to delay serious government action against their product for decades. Some of the same firms are coordinating the skeptical response to climate change, and some of the same scientists, like Frederick Seitz and S. Fred Singer, have played roles in both controversies (despite having expertise in neither cancer nor climate).

Hoggan and Littlemore describe how these omnipresent figures benefit financially from their support of corporate needs, and they reveal the irrelevance of many of the "thousands" of signatories on some highly publicized petitions. But they don't address why other scientists--many intelligent and sincere--sign on to such statements. Why are researchers who have no expertise in climate, as well as members of the public, so willing to question those who do, when they would never presume to second guess articles about cancer treatments or particle theory?

In the end, though, these efforts have achieved their goal: keeping the journalistic treatment of global warming "balanced," in contrast to the clear trend in expert opinion. This problem was captured in the 2004 study by Maxwell T. Boykoff and Jules M. Boykoff, "Balance as bias: global warming and the US prestige press," published in the journal Global Environmental Change, and in Chris Mooney's story, "Blinded by science: how 'balanced' coverage lets the scientific fringe hijack reality," in Columbia Journalism Review later that year.

This book is not likely to convince true skeptics of the seriousness of global warming. For those who understand the stakes, however, the book is a powerful inoculation, helping readers recognize the conspiracy-theory talking points, most recently regurgitated by the authors of SuperFreakonomics, for the misinformation they are.

Where there's smoke, there's not always fire. Sometimes, it's a massive corporate-sponsored campaign to blow smoke.

Sunday, October 25, 2009

Reliability

A light bulb, floating over someone's head, has become a universal icon for a flash of insight. But the history of the incandescent bulb also shows that a good idea is not enough. The success of a new gadget often hinges on its ability to survive the rigors of the real world.

An electric current causes many materials to glow…momentarily. Glowing is a natural consequence of the "red-hot" temperatures produced by the current. But another consequence is reaction with oxygen in the air that promptly burns up the would-be filament. Protecting it in an air-free glass bulb is a key step toward practical electric light.

The filament material is also critical. The recently reopened Thomas Edison National Historical Park in West Orange, New Jersey recounts Edison's 1870s search through thousands of candidates, many involving carbonized threads of various sorts, including exotic materials like bamboo. Much of this selection process aimed at increasing the lifetime beyond the 15 or so hours of the first "successes." (British inventor Joseph Swan devised a similar device around the same time.)

Some 25 years later, Hungarian and Croatian inventors introduced tungsten filaments like those we use today, which last longer while providing more light.

The tungsten-halogen lamp extends the improvement. Its white-hot filament delivers much more light in the visible part of the spectrum, but over time even tungsten evaporates at these temperatures. Small amounts of halogens like iodine or bromine in the bulb react with the evaporated tungsten. The resulting halide migrates back to the filament where the heat decomposes it, leaving the tungsten back where it started, while the halogen goes on to pick up other stray atoms of tungsten. The result is a brighter, more efficient bulb.

Situations like this, where performance is directly improved by increasing the lifetime, occur frequently, for example in semiconductor electronics. In one example that I encountered while working in integrated-circuit technology a decade ago, making a transistor shorter improves its speed, both by increasing the electric field and by decreasing the distance electrons have to traverse. But a few of the more-energetic electrons cause atomic rearrangements that build up over time to change the transistor's properties and render it useless. The shortness of many transistors, and thus their performance, was directly limited by the need to avoid these "hot-electron effects."

The study of processes that lead to eventual failure goes by the somewhat misleading name of reliability. Typically, after an initially high failure rate, called "infant mortality," a batch of devices settles into a steady, low failure rate over time until some accumulating damage leads to eventual wearing out of all of the remaining devices.

Experiments on many similar devices are required to get a complete picture of the different ways they fail. Researchers need to know not just the median lifetime but its statistical distribution, to place strict limits on the number of possible failures. The statistics are also needed to guarantee that a complex circuit with many devices will continue to function.
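
As a concrete (and entirely made-up) illustration of why the distribution matters, here is a little sketch that fits a Weibull distribution--a common choice for wear-out statistics--to simulated lifetimes and then asks how many parts fail early, which is the question a circuit designer actually cares about. Everything here is simulated; the parameters are invented for illustration.

```python
# Sketch with invented numbers: fit device lifetimes to a Weibull distribution
# and estimate the early-failure fraction, which the median alone can't give you.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Simulated wear-out times (hours) for 200 devices; parameters are made up.
lifetimes = weibull_min.rvs(2.5, scale=1000.0, size=200, random_state=rng)

# Fit shape and scale, pinning the location at zero (no failures before t=0).
shape, loc, scale = weibull_min.fit(lifetimes, floc=0)

median_life = weibull_min.median(shape, loc=loc, scale=scale)
early_fail_frac = weibull_min.cdf(100.0, shape, loc=loc, scale=scale)

print(f"median lifetime ~ {median_life:.0f} h")
print(f"fraction failed by 100 h ~ {early_fail_frac:.3%}")
```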

Reliability engineers also need to quickly measure degradation with accelerated testing, for example at elevated temperature. They then extrapolate those results back to the milder conditions of ordinary wear and tear. For example, if you've owned your computer for a few years, its transistors have already been around much longer than the prototypes that were used to vet the latest manufacturing changes.
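
The extrapolation itself is often an Arrhenius-type calculation. Here's a minimal sketch, assuming a single thermally activated failure mechanism with an activation energy of 0.7 eV--a number I've chosen purely for illustration, not a property of any real device.

```python
# Sketch: Arrhenius acceleration factor for temperature-accelerated testing.
# Assumes one thermally activated mechanism; the activation energy is illustrative.
import math

K_BOLTZMANN_EV = 8.617333262e-5   # Boltzmann constant, eV per kelvin

def acceleration_factor(ea_ev, t_stress_c, t_use_c):
    """Ratio of the failure rate at the stress temperature to that at normal use."""
    t_stress = t_stress_c + 273.15
    t_use = t_use_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# A 1000-hour stress test at 125 C stands in for this many hours at 55 C:
af = acceleration_factor(0.7, t_stress_c=125.0, t_use_c=55.0)
print(f"acceleration factor ~ {af:.0f}; 1000 h at 125 C ~ {1000 * af:.0f} h at 55 C")
```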

Confidently extrapolating wear-out times requires deep and accurate models of subtle, microscopic degradation mechanisms. As a result, reliability involves many fascinating physical phenomena, as well as an appreciation of statistics and of the ways that devices are put to practical use.

And as it does for the light bulb, this understanding can improve performance just as profoundly as a new invention can.

Thursday, October 22, 2009

Fictitious Forces

Gravity is a myth.

Not because, as bumper stickers tell you, "The Earth sucks." Rather, the "force" we call gravity is an illusion that arises from our own motion through space.

Physicists have known this for almost a century, but for some reason this simple and beautiful reality is withheld from everyone except select graduate students. I'm going to let you in on the secret.

You feel a force when your back presses against the seatback of an accelerating car. But you probably no longer perceive it as a force, knowing that it is really an artifact of the car's motion: your body is just trying to stay put, or to keep moving at whatever speed it already had. The push of the seatback nudges you to a higher speed so you can keep up with the car.

Another fictitious force is the so-called centrifugal force that throws you outward in a turning car. If you looked down on this scene, you'd see that your body is just trying to keep moving in a straight line. The real force is the centripetal force that pulls you back toward the center of rotation, staying with the car.

These forces are fictitious. They only appear because an object's motion is compared with an accelerating reference (the car). The clue is that, if left alone, all objects accelerate the same way, independent of their mass.

Normal forces produce smaller acceleration for "heavier" objects--those with greater mass. The introductory physics equation F=ma captures this relationship between force, mass, and acceleration.

Because the acceleration from a fictitious force is independent of mass, the apparent force must grow in proportion to the mass. Holding a toddler on your lap during a sharp turn is harder than holding a less massive soda can. This proportionality to mass is a clear signal that a force is fictitious: the objects are really just trying to move at a constant speed. It's the car that's accelerating. The equation for centrifugal "force," for example, includes the mass of the object, as it should.
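
In equations, the pattern is easy to see: the mass that appears in a fictitious force is the same inertial mass that appears in F=ma, so it cancels out of the resulting acceleration. Gravity, written Newton's way, has exactly the same structure (a standard textbook comparison, nothing exotic):

```latex
% Fictitious (centrifugal) force vs. Newtonian gravity: the mass cancels in both.
F_{\text{centrifugal}} = \frac{m v^2}{r}
  \;\Rightarrow\; a = \frac{F}{m} = \frac{v^2}{r}
  \quad (\text{independent of } m) ,
\qquad
F_{\text{gravity}} = \frac{G M m}{r^2}
  \;\Rightarrow\; a = \frac{G M}{r^2}
  \quad (\text{also independent of } m) .
```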

Perhaps you can see where this is going: according to Newton, gravity is also proportional to mass in precisely this way. Unless this is a coincidence, this means gravity is a fictitious force. Over the years physicists have tested the coincidence idea by comparing the "gravitational" and "inertial" mass. They're always exactly the same.

It was Einstein who took the fictitious nature of gravitation seriously. He developed his theory of general relativity by starting with this equivalence principle: inside a small box, like an elevator car, there's no way to distinguish the force of gravity from acceleration of the car.

In this view, the natural state of all objects around you, as well as your own body, is to constantly accelerate downward. The reason you don't do that is that your chair is constantly pushing up on you, accelerating you upward relative to this natural state of motion.

The idea of "free fall" is easier to accept for an orbiting spacecraft or NASA's "vomit comet" on its parabolic arc. But once you accept the idea that you and everything around you are constantly accelerating upward, it's pretty simple, right?

One reason this isn't common knowledge is that pretending gravity is a force makes it easier to connect the falling of an apple on earth to what keeps the planets in their orbits, which is pretty profound, too.

For orbits, acceleration is in different directions, for example, on opposite sides of the earth. It took Einstein more than a decade to figure out how to stitch together different accelerations at different places, using the sophisticated mathematics of curved spaces that had only been developed in the 19th century. He also had to describe how massive (or energetic, thanks to E=mc²) objects warp space to create the curvature, without screwing up the interconnection between space and time that he had found in his theory of special relativity.

For most purposes, regarding gravity as a force gives the same, correct answer. But not always: small timing corrections from general relativity are critical for global positioning systems (GPS).

And isn't it interesting to imagine your chair accelerating you upwards?

Wednesday, October 21, 2009

Freedom (From Cancer) Is Not Free

For decades, the American Cancer Society has been a stalwart advocate of steps to reduce cancer risk, including early testing. In a fine story in today's New York Times (registration required), Gina Kolata reports that they are about to back off on those guidelines, for breast and prostate cancers.

The essential issue is that early screening can find small tumors that might never become a problem, or might even disappear on their own. For these tumors, biopsies, further tests, or treatments are an unnecessary financial burden and also a health risk. On the other hand, many rapidly growing tumors may become serious problems in the time between tests.

In a related article (subscription required) published tomorrow [sic] in the Journal of the American Medical Association, entitled "Rethinking Screening for Breast Cancer and Prostate Cancer," three doctors review the disappointing results of twenty years of early detection, and conclude:


"One possible explanation is that screening may be increasing the burden of low-risk cancers without significantly reducing the burden of more aggressively growing cancers and therefore not resulting in the anticipated reduction in cancer mortality."

These results underline the need for measuring the comparative effectiveness of all medical procedures. Not everything that seems like a good idea really is. For every patient whose aggressive early cancer is stopped in its tracks (and whose doctors will vividly remember the events), there are others for whom the trauma, health risk, and expense were unnecessary--and avoidable.

In his book, The Healing of America (which I reviewed here), T.R. Reid notes that the PSA (prostate-specific antigen) test that is routinely given to older men in the U.S. is not paid for by the National Health Service in the U.K. (p. 120). No doubt this is partially a matter of cost effectiveness. But as his British doctor explained, it is also a matter of medical effectiveness. It may seem brutal to trade off the few lives saved by early testing against the lives lost to unnecessary intervention, but such statistical comparisons are, for now, our only option.

Ultimately, though, we need better tests: tests that can identify the molecular or other markers that distinguish between aggressive tumors that people will die from and more passive cancers that people will die with. As the JAMA authors conclude, "To reduce morbidity and mortality from prostate cancer and breast cancer, new approaches for screening, early detection, and prevention for both diseases should be considered."

[Update: Paul Raeburn at the Knight Science Journalism Tracker notes that although other outlets covered this issue, Kolata is unique in projecting a revision from the American Cancer Society.]

[Update (11/7/09): Science-Based Medicine has a fantastic, detailed discussion of the science behind this issue. Short message: keep screening.]

Tuesday, October 20, 2009

Alternative Splicing

In 1977, researchers were surprised to learn that the protein-coding sequence of messenger RNA doesn't arise from a continuous section of DNA.

Instead, work that earned Phil Sharp and Richard Roberts a Nobel in 1993 found that the as-transcribed pre-mRNA includes sections called introns that are then cut out of the sequence while the remaining exons are spliced back together (the words can apply to either DNA or RNA).

The final protein-coding section is flanked on both ends, called 5' and 3', by untranslated regions (UTRs). These noncoding regions are also transcribed from the DNA, but aren't usually described as exons. Their sequence still matters, though: out in the cell, the 3' UTR is a favorite target for complementary microRNAs that affect the stability or translation of the messenger RNA.

Additional processing steps in the nucleus add to the spliced-together RNA a trademark chemical cap at its 5' end and a tail of repeated adenines at its 3' end. Both the cap and the polyadenylated tail are important for the later translation of the mature mRNA at ribosomes, once it has been exported from the nucleus.

A further wrinkle was the realization that the splicing can happen in different ways, as illustrated in the figure (from Wikipedia), which connects by blue lines the pieces that can be neighbors in the final RNA. The multiplicity of possible proteins resulting from this alternative splicing significantly increases the number of protein products available from a given stretch of DNA. Most human proteins occur in more than one splicing arrangement, called an isoform.


The splicing is done by a large complex of RNA and proteins called the spliceosome. The choice of isoform depends in part on special RNA sequences, either within an exon or an intron, that bind proteins that promote or inhibit splicing at a particular point. This binding is sensitive to sequence changes that don't change the coded amino acid and are therefore called "synonymous." Because of splicing, these changes aren't always truly synonymous: they can change the final protein.

In addition, the relative amounts of the alternatively spliced isoforms can change, for example during development of an organism or between different tissues, notably the brain. The regulation of this process provides yet another tool for controlling gene expression, but scientists are still clarifying what determines the splice configuration.

To get a global view of alternative splicing, Chris Burge of MIT, at a conference that I covered last year, described a technique called mRNA-seq that preferentially sequences short RNA segments that contain the polyadenylated tail, and are therefore proper mRNA candidates for later translation. (This eliminates the confusing background of transcribed RNA that is useless or acts in other ways.) Using this technique, he and his colleagues identified more than 10,000 sequences that coded for multiple isoforms. Of these, Burge estimated that more than 2/3 were present in different amounts in different tissues, so the different forms seem likely to be doing important things.

Burge also found a surprising connection between the processes that add the polyadenylated tail and those that do splicing. The former process occurs at the end of transcription, but it looks as if the RNA is already being handed off to the splicing machinery before transcription is finished.

Although it is still poorly understood, alternative splicing is an important and widespread mechanism for regulating genes, as well as for getting multiple proteins out of a single region of DNA.

Monday, October 19, 2009

Blogstorm Warning: SuperFreakonomics

Is all publicity good publicity? Stephen Dubner and Steven Levitt, the authors of the bestselling book Freakonomics and the like-named blog, may find out. I, for one, have shelved any inclination to buy their new book, SuperFreakonomics: Global Cooling, Patriotic Prostitutes, and Why Suicide Bombers Should Buy Life Insurance, after seeing the blogosphere's reaction to its climate-change chapter.

The battle was joined in this post by the vocal climate-change activist (and fellow MIT physics PhD) Joe Romm. Romm followed up here, here, here, and here, giving some credence to the idea that he has an axe to grind. But in light of the wide audience the book is likely to get, a preemptive takedown might be justified.

Among Romm's claims is that the Steves misrepresent the views of Ken Caldeira, the Carnegie Institution climate scientist at Stanford who has been promoting research into geoengineering approaches to global warming. Romm quotes the book as saying of Caldeira, "his research tells him that carbon dioxide is not the right villain in this fight." Meanwhile, Caldeira's web page prominently features this statement: "Carbon dioxide is the right villain," says Caldeira, "insofar as inanimate objects can be villains." Sounds pretty clear.

Romm may not be entirely clean, though. Dubner claims that Romm goaded Caldeira into disavowing the book's characterization. Roger Pielke Jr., who has had his own battles with Romm, regards him as a liar. It does look as though Romm may have compromised his credibility, although his latest post makes a good defense.

But even if Romm made mistakes, it doesn't make those in SuperFreakonomics any more excusable. It is simply disingenuous to claim, as Dubner does, that the "Global Cooling" in the subtitle is supposed to refer to geoengineering solutions, not to the canard that there was a scientific consensus in the 1970s that climate was in danger of cooling. As painstakingly tabulated by Brad DeLong, this is just one of a whole host of misleading or mistaken statements in the book. Paul Krugman also takes issue with the Steves' take on a particular economics case for early action.

Krugman sums up the problem with Freakonomics' trademark contrarianism:

Clever snark like this can get you a long way in career terms — but the trick is knowing when to stop. It's one thing to do this on relatively inconsequential media or cultural issues. But if you're going to get into issues that are both important and the subject of serious study, like the fate of the planet, you'd better be very careful not to stray over the line between being counterintuitive and being just plain, unforgivably wrong.

Saturday, October 17, 2009

What's In a Name?

I was surprised to see a fresh flurry of news stories in the last few days, more than a month and a half after two papers about ostensible magnetic monopoles in spin ices were posted online (although they just came out in print).

I want to say one word to you. Just one word.

Are you listening?

Magnetricity.

Apparently one of the teams behind the monopole experiments has a new letter to Nature (with an accompanying News and Views article) measuring the total magnetic "charge" in a model-independent way using muons. (See the lifted graphic for a little explanation.)

The researchers adapted a venerable technique for measuring the charges of ions in electrolyte solutions. In a magnetic field, opposite monopoles drift in opposite directions, and the muons sense the field that results from their separation. Seems like a nice experiment.

But by calling the effect "magnetricity," the scientists ensured themselves breathless coverage. It's great marketing, although as far as I can tell they did not measure the magnetic analog of an electric current, as claimed by some news stories. They measured the separation that results from the current, but not the current itself.

According to the news release,

Dr Sean Giblin, instrument scientist at ISIS and co-author of the paper, added: "The results were astounding, using muons at ISIS we are finally able to confirm that magnetic charge really is conducted through certain materials at certain temperatures – just like the way ions conduct electricity in water."

Now you might not think that the lethargic drifting of magnetic "charges" that only exist in a very special crystal would form the basis of a new information technology, especially when you realize that "certain temperatures" are about a degree above absolute zero. After all, drifting electric charges in electrolytes (like batteries) are only important because they liberate truly mobile electrons in attached wires that do the real work.

But for researchers who only last month claimed to discover a fundamental particle predicted by Dirac, is revolutionizing electronics too much to ask? According to New Scientist,

Bramwell speculates that monopoles could one day be used as a much more compact form of memory than anything available today, given that the monopoles are only about the size of an atom.

"It is in the early stages, but who knows what the applications of magnetricity could be in 100 years time," he says.

I think I might be able to guess.

[Other stories at Physics World (the best one I saw), The Times, BBC (did I mention the researchers were from the U.K.?), Popular Science, and Next Big Future.]

Friday, October 16, 2009

Neutral Evolution

We all know the power of natural selection to drive evolution toward more successful characteristics, or phenotypes. At the level of DNA, however, most changes convey neither advantages nor disadvantages--they are neutral. Random drift can happen at the phenotype level as well, but at the genotype level it predominates.

The indifference of evolution to the vast majority of molecular changes was described by Japanese geneticist Motoo Kimura in 1968. Mutations that improve fitness are very rare, he argued. Those that make an organism worse off are more common, but if they are truly detrimental they will never be passed on. Most mutations, however, will make little or no difference to survival. These neutral mutations will be free to accumulate at random.

In the simplest mathematical model, mutations arise in each nucleotide of DNA with constant probability each generation. A larger population will include proportionally more new mutations at each position. For a mutation to become "fixed," however, it must spread through the entire population, rather than dying out. The larger the population, the less likely this is; the fixation probability is inversely proportional to the population size. Because these two factors cancel, the rate at which mutations become fixed at any position doesn't depend on population size. (For a very clear recent introduction to this model in the context of geographically heterogeneous populations, see the July 2009 article in Physics Today by Oskar Hallatschek and David R. Nelson.)
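
The arithmetic of that cancellation is worth writing out once. In the simplest (diploid) textbook idealization, with per-site mutation rate μ and population size N:

```latex
% Neutral substitution rate: new mutations per generation times fixation probability.
\text{new neutral mutations per site per generation} = 2N\mu ,
\qquad
P(\text{fixation of any one new mutation}) = \frac{1}{2N} ,
\qquad
\text{substitution rate} = 2N\mu \times \frac{1}{2N} = \mu .
```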

The constant accumulation of fixed mutations underlies the powerful "molecular clock" technique, which allows researchers to estimate how recently two species diverged from one another by counting the number of differences that have accumulated in corresponding DNA sequences. Although the actual rates are more variable than would be expected from this model, molecular clocks provide a powerful quantitative window into our evolutionary past and into relationships between living species.
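
A back-of-the-envelope version of the clock, with invented numbers and ignoring the corrections for multiple substitutions at the same site that real analyses use, looks like this:

```python
# Sketch: naive molecular-clock estimate of divergence time.
# Numbers are invented; real analyses correct for multiple hits per site.
def divergence_time(differences, sites, rate_per_site_per_year):
    """Years since two lineages split, assuming a constant neutral rate.

    Divide by 2 because changes accumulate independently along both lineages.
    """
    per_site_divergence = differences / sites
    return per_site_divergence / (2.0 * rate_per_site_per_year)

# e.g. 120 differences in a 10,000-base neutral region, at 1e-9 changes/site/year:
print(f"{divergence_time(120, 10_000, 1e-9):,.0f} years")   # ~6,000,000 years
```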

The background accumulation of mutations over evolutionary time also makes it possible to discern sequences that don't change. As a rule, this happens when changes in those sequences prevent survival or reproduction, so they never accumulate: they are constrained during evolution. Researchers often regard constraint (or the related property of sequence conservation) as a sign that a particular section of DNA has a critical function, even if they don't yet know what that function is.

The best known examples of the two types of mutation are those in sequences that code for amino acids in proteins. Mutations that change the amino acid can destroy the function of the protein, so they will be constrained if the protein itself is important. Researchers like Arend Sidow at Stanford have shown that the sequence is highly constrained in the active sites of proteins but less so in less critical regions.

By contrast, because the genetic code is redundant, some mutated groups of three bases still specify the same amino acid, so the protein chain will be unchanged. These "synonymous" mutations are often used to calibrate the background mutation rate. However, the exact base sequence can still have an effect, for example by changing the preferences among different alternative splicing arrangements of the final RNA.

Although constraint and function often go together, there are exceptions, as I discuss in my July 10 story in Science (subscribers only, but I have a pdf in the Clips section of my website, encrypted with the password "monroe"). Sometimes important sequences don't seem to be constrained, and sometimes constrained sequences don't seem to be important. Understanding when and why this happens is important as researchers look for new functions in the 98.5% or so of the human genome that doesn't code for proteins, which includes microRNAs and other regulatory regions.

Thursday, October 15, 2009

A Missing Layer

Models of biological networks have always had gaps, but they are bigger than most researchers realized.

Only in the past few years have biologists begun to recognize the extensive regulatory role of naturally occurring small RNAs. The best known of these endogenous RNAs are chains of 21-23 nucleotides with the rather unfortunate designation of "microRNA" (whose abbreviation, miRNA, is awkwardly similar to the mRNA used for messenger RNA). MicroRNAs arise from sections of DNA whose RNA transcripts contain nearly complementary mirror-image sequences, and so naturally fold back on themselves to form a "stem-loop" structure. Processing by a series of protein complexes liberates one strand from the overlapping section and incorporates it into specialized RNA-protein complexes in the cytoplasm that modify protein production.

Traditionally, systems biologists aiming to unravel gene-regulation networks have relied on the wealth of data from microarrays that measure the mRNA precursors of proteins. By regarding the mRNA abundance as a proxy for the corresponding protein, and looking at how various mRNA levels change with cellular conditions, the researchers construct hypothetical networks of interacting genes. In the graphical representations of these networks, genes are connected by a line or "edge" if the protein product of one seems to act as a transcription factor to change the activity of the other.

MicroRNAs complicate this picture dramatically, although many researchers don't yet incorporate them. Improved tools, often direct sequencing of millions of fragments rather than matching pre-chosen sequences in microarrays, let researchers survey the small RNAs in the cell. In many cases, as in the work of Frank Slack of Yale University described in my latest eBriefing for the New York Academy of Sciences, microRNAs control cellular processes in much the same way as traditional protein transcription factors--in Slack's case extending the lifespan of worms. These regulatory RNAs are a previously unsuspected layer of genetic regulation.

Some of the RNA-protein complexes promote degradation of messenger RNA that is complementary to the bound miRNA. In this case the remaining mRNA could still be a good indicator of a gene's activity, although not of its original transcription. Sometimes accounting for the miRNA might require only a change in the mathematical relationship between genes.

Many miRNAs, however, as well as some transcription factors, act as "master regulators," generating coordinated activity among scores of genes. As a result, these master regulators can effectively change one genetic network into an entirely different one--adding or removing edges. For example, researchers have constructed networks for cancerous cells in which the connections differ markedly from those for their healthy counterparts. Such context-dependent networks may be simple and accurate in specific situations, but they obviously lack important ingredients.

In other cases, a miRNA can continuously vary the activity of genes, rather than being a simple on/off switch. Again, such coordinated response will be hard to capture unless the hidden factor is explicitly identified.

A second type of RNA-protein complex slows (or less often speeds) the translation of complementary messenger RNA. One profound implication is that the measured levels of mRNA may no longer be a good proxy for the levels of the protein produced from it. Indeed, in the few cases where researchers have done the experiments, they have found only weak correlations between the levels of mRNA and the corresponding proteins. Any procedure that depends on these levels being equivalent is on thin ice.

In addition to these quantitative effects, qualitatively new behavior appears when molecules are connected in feedback loops. Systems biologists have catalogued the action of many interesting "motifs." Even two molecules, for example, can act to stabilize a concentration--if the feedback around the loop is negative--or act as a bistable switch--if the loop feedback is positive. Clearly, if such motifs are acting in the hidden miRNA layer, no tweaking of the gene-gene interactions will replicate their effects.

All of this reinforces the need for researchers to develop and use as many high-throughput techniques as possible to measure different types of RNA as well as the different states of proteins in cells. The "reverse engineering" of networks never seemed easy. Now it's clear that it's even harder than it seemed.

Wednesday, October 14, 2009

Short RNAs in Stress and Longevity

My latest eBriefing for the New York Academy of Sciences covers a day-long May 12 meeting on "Short RNAs in Stress and Longevity." (The web publication was delayed by redesign and reorganization disruptions at the academy).

The sponsor, the Non-coding RNA Biology Discussion Group, used to call itself the RNA Interference Discussion Group. Its new name more accurately reflects the diverse regulatory and other roles of short RNAs. Many of these roles are far from being completely understood, and this meeting touched on several of them.

The combination of stress and longevity may seem like an odd pairing. But to the extent that many organisms have a built-in, switchable longevity program, stresses like starvation, which make living longer more attractive than reproducing, can activate it. The stress response is an entire field of its own, and stresses like heat shock are well known to produce major shifts in gene expression, inducing production of proteins like chaperones that help cells cope.

Frank Slack of Yale University and Ramanjulu Sunkar of Oklahoma State University explored two fields where researchers have extensively studied gene regulation by traditional protein-based mechanisms. For longevity in the worm C. elegans and for stress responses in plants, respectively, they saw the effects of naturally-occurring short RNAs (microRNAs and their plant relatives), and changed these responses by manipulating the short RNA levels. As in other fields, these studies are revealing a critical layer of gene regulation that has been overlooked until quite recently.

Germano Cecere works in the lab of Alla Grishok of Columbia University, who had found that short RNAs can regulate not just messenger RNA translation and degradation, but also its initial transcription from DNA. To find new examples, Cecere searched for short RNAs that are involved with chromatin remodeling, and identified many that play a role in stress and longevity.

Cells under stress often develop specialized complexes of proteins and RNA, known as stress granules, which may host some of the regulatory reactions or store low-priority messenger RNA. Anthony Leung of Phil Sharp's lab at MIT described how a particular polymer known for its role in DNA processes might act as a scaffold for these granules or even help regulate their activity.

Irina Groisman of the André Lwoff Institute dissected protein complexes that associate with the poly-adenylated tails found in mature messenger RNA. These complexes enforce the tradeoff between cellular senescence, which is related to longevity, and cancer.

Beyond the regulatory sequences that typically contain twenty-something bases, larger RNA can also process information on its own. Evgeny Nudler of NYU, who had identified riboswitches that respond to metabolites, described a different 600-nucleotide-long RNA that is the temperature-sensing element in the heat-shock response. This RNA sensor binds with the translational elongation factor eEF1A (which can also interact with non-heat stresses) to generate the response.

As this breathless summary hints, it's a challenge to combine such disparate topics into a coherent writeup. In this case, with just six talks, I gave up on aligning the talks into common themes and simply summarized each one separately. This diversity shows how wide open the field of RNA remains, going far beyond its traditional functions as messenger RNA, transfer RNA, and ribosomal RNA.

Tuesday, October 13, 2009

Aharonov-Bohm Effect

Thomson Reuters speculated that the 2009 Nobel Prize for Physics might go in part to Yakir Aharonov of Chapman University, in Orange County, California. Fifty years ago, Yakir Aharonov and his thesis adviser David Bohm devised an astonishing experiment that showed that electrons can sense a magnetic field without passing through it (which had, unknown to them, been described a decade earlier by Werner Ehrenberg and Raymond Siday).

The proposed experiment--since confirmed in the laboratory--requires a long coil, or solenoid. The results depend on the magnetic field threading along the axis of the coil, not on any fields leaking out the sides or ends.

When electrons are fired at the coil, they reveal their wave nature by passing on both sides simultaneously. On the far side, the chances of an electron appearing at a particular position depend on the relative "phase," which is the difference in the number of wavelike oscillations for electrons going on either side. If the peaks from one side match the troughs from the other, few electrons will be seen, even if the wave from each side alone is strong. This interference effect is well known for other waves, like light.

What quantum mechanics predicted, and experiments confirmed in great detail, is that the relative phase is directly proportional to the magnetic field passing through the solenoid. The surprising thing is that this is true even though on neither side do the electrons pass through any magnetic field. They respond to the field between the paths, at a place where they never go.

Now a mathematical digression: It is customary to describe this result using the "vector potential." As a vector field, this quantity has both a magnitude and a direction at each point in space. The vector potential is related to the magnetic field, while its "scalar" counterpart is related to the electric field. But the exact values of the potentials are somewhat arbitrary, so in classical physics they are regarded as poor cousins of the fields. That's less true in quantum mechanics.

The arbitrariness is easiest to describe for the scalar, or electrostatic, potential, which is closely related to voltage. The electric field is the negative of the gradient, or slope, of this potential. The field is large where the potential varies rapidly with position and points in the direction where the potential decreases fastest. But there are an infinite number of potentials that give the same field, because shifting the potential by the same constant everywhere doesn't change how rapidly it varies in space.

The arbitrariness of the vector potential that determines the magnetic field is more subtle, because their relationship is more subtle. The magnetic field is the "curl" of the vector potential, which is how much it swirls around in a circle (the field points in the direction perpendicular to the swirling). This means that you can add to the vector potential any vector field that has no curl, in what is called a gauge transformation, and the magnetic field will be the same.
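
In symbols (standard electromagnetism, nothing specific to this experiment), the two kinds of arbitrariness look like this:

```latex
% Potentials determine the fields, but not uniquely (gauge freedom).
% Static case shown; more generally E also involves the time derivative of A.
\mathbf{E} = -\nabla \phi , \qquad
\phi \rightarrow \phi + \text{constant} \;\text{ leaves } \mathbf{E} \text{ unchanged;}
\qquad
\mathbf{B} = \nabla \times \mathbf{A} , \qquad
\mathbf{A} \rightarrow \mathbf{A} + \nabla \chi \;\text{ leaves } \mathbf{B} \text{ unchanged.}
```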

Quantum mechanics stipulates that the momentum of the electron (and thus its inverse wavelength) should be corrected by the addition of the vector potential (times a constant). A convenient form (gauge) for the vector potential outside a solenoid is one that everywhere points tangentially along concentric circles. For electrons on one side, this adds to the phase, and on the other side it subtracts, causing the experimentally measured phase shift of the Aharonov-Bohm effect. The size of the relative phase shift depends on the vector potential.

It is common to conclude that in quantum mechanics, in contrast to classical physics, the vector potential is "more real" than the magnetic field. I regard this conclusion as misguided. The phase shift can only be observed by interference between complete paths that pass on opposite sides of the solenoid, which reflects the total phase shift around a loop (one that encloses the solenoid). Because the magnetic field represents the swirliness of the vector potential, this change around a loop is just equal to the total magnetic flux passing through the loop, or in this case the solenoid.
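
The loop argument can be written in one line. The relative phase between the two paths is the line integral of the vector potential around the closed loop they form, which by Stokes' theorem is just the enclosed flux--a gauge-invariant quantity (the overall sign depends on conventions for the electron's charge):

```latex
% Aharonov-Bohm phase: gauge-dependent potential, gauge-invariant loop integral.
\Delta\varphi = \frac{e}{\hbar}\oint \mathbf{A}\cdot d\boldsymbol{\ell}
             = \frac{e}{\hbar}\,\Phi_{\text{enclosed}}
             = 2\pi\,\frac{\Phi_{\text{enclosed}}}{h/e} .
```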

Another clue that the vector potential is not really important is that its value changes with different choices of gauge, but the results do not.

So what is a better way to think about the Aharonov-Bohm effect? I like to think about that wonderful print by M.C. Escher called Ascending and Descending. As one passes around the path, there is no clue that anything unusual is happening. But on completing a circuit, it is apparent that something is different. Similarly, a magnetic field changes something subtle (the quantum-mechanical phase) of any electron that passes around it. But that hardly makes the effect less mysterious.

[Note: the September 2009 Physics Today has a story on Aharonov-Bohm effects.]

Friday, October 9, 2009

Deadly Mutant Bugs from Space!

Do we have a destiny to explore space, or should we leave it to expendable but increasingly capable robots? This perennial debate was inflamed by the recent conclusions of the Augustine Commission that the current budget was woefully inadequate for getting people to Mars.

But what about the science? Last month, NASA released a report describing more than 100 science experiments done on the International Space Station over the past eight years. I was surprised to see that "advances in the fight against food poisoning" were listed first among the accomplishments:

"One of the most compelling results reported is the confirmation that the ability of common germs to cause disease increases during spaceflight, but that changing the growth environment of the bacteria can control this virulence."

I wrote about the original research for Scientific American (subscribers only) in 2007. If this is the poster child for space research, maybe we should stay home.

Don't get me wrong, this is interesting and surprising research. But several aspects of the work deflate its global significance.

First, note the word "confirmation": The researchers had already demonstrated, in labs on Earth, the increased virulence of Salmonella Typhimurium, which causes food poisoning. They did this by building a special chamber to simulate the microgravity environment. It's important to confirm the results in real space flight, but it didn't really show any surprises.

You may wonder why there would be any effects at all from gravity, which is a very weak force. The electrostatic force between two electrons, for example, is more than 10⁴² times stronger than the gravitational force at the same distance.
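
You can check that ratio yourself from the standard constants (a one-off sketch, nothing subtle):

```python
# Sketch: ratio of electrostatic to gravitational force between two electrons.
# The separation cancels out because both forces fall off as 1/r^2.
E_CHARGE = 1.602176634e-19        # coulombs
ELECTRON_MASS = 9.1093837015e-31  # kilograms
COULOMB_K = 8.9875517923e9        # N*m^2/C^2
GRAV_G = 6.67430e-11              # N*m^2/kg^2

ratio = (COULOMB_K * E_CHARGE**2) / (GRAV_G * ELECTRON_MASS**2)
print(f"{ratio:.2e}")   # roughly 4e42, i.e. more than 10**42
```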

Of course, if you fall off a building, gravity is plenty strong. But if a bacterium fell off a building, it would just float away. The strength of gravity is proportional to the mass of an object, and thus to its volume. As nanotechnologists know from painful experience, other forces like surface tension and fluid viscosity--which depend on surface area, not volume--become much more important as things get smaller. For a micron-sized bacterium, these forces are perhaps a million times larger, relative to gravity, than in a meter-sized person. So the bacterium simply can't directly detect the difference between a really tiny gravity force and none at all.

What the bacterium can detect is the flow of the surrounding fluid. Gravity (through convection) is one of many things that helps stir things up. (Growing crystals in this quiescent environment has often been invoked as another reason to do science in space.) So if you construct a special chamber where the other stirring is absent (as the researchers did), then a little gravity makes a difference.

What kind of difference? Some news stories at the time talked about microgravity causing mutations. This is just wrong. What happened was that the new, ultrastill environment switched the bacteria into a new way of expressing the genes they already had, turning some on and some off. This made them more virulent, by a factor of three, to chickens.

Why would this happen? Lead researcher Cheryl Nickerson speculated to me that the ultrastill microgravity environment might resemble the sheltered environment the bacteria ordinarily encounter, for example, in remote nooks and crannies of the digestive tract. The new expression profile could reflect the ordinary switch they make as they leave the rough-and-tumble of the outside world and the churn of the stomach behind and prepare to do their dirty work.

The researchers found some active genes that are normally associated with formation of the dense mats known as biofilms. Microgravity could help jumpstart this process by switching their expression ahead of time--although that might also make the critters less successful getting to the intestine in the first place.

So low gravity creates a quiescent fluid (which can also be recreated in the laboratory) that mimics the normal conditions under which salmonella activates its natural program to settle in for the long haul as a biofilm. This is all interesting, and could be useful. In fact, a company called Astrogenetix is now touting the space research as a route to a salmonella vaccine, and has flown further experiments on the shuttle to test it.

But doesn't it seem like there might be more direct (and cheaper) ways to learn these things?


 

Thursday, October 8, 2009

Are the Nobel Categories Obsolete?

Considering that they've been around for more than a century, the Nobel science categories of "physics," "chemistry," and "physiology or medicine" have held up pretty well. But in truth, much of their durability reflects the fact that the committee hasn't worried much about their precise definitions. Nor have they paid much attention to Alfred Nobel's requirement that the prizes go to "those who, during the preceding year, shall have conferred the greatest benefit on mankind." (Care to make a case for the cosmic microwave background, anyone, other than that understanding the universe is inherently "beneficial" to mankind?)

Still, as noted by Doug Natelson, some people will regard this year's physics prize as an injustice, because both the fiber and CCD achievements are primarily engineering, not physics. In fact, both Kao (with processing experts from Corning and Bell Labs) and Boyle and Smith had previously gotten the Draper Prize, which is often described as the "Nobel Prize of Engineering," as had 2000 Physics Nobel Laureate Jack Kilby, one of the inventors of the integrated circuit (Robert Noyce had died by the time of the Nobel).

The conflation of physics and engineering raises two issues. On the one hand, some physicists will justifiably ask what right the Nobel Committee has to give their prize to people who aren't even doing real physics. I partially agree with Doug that this is an elitist attitude that devalues the real intellectual contributions of engineers. But what Kao did was engineering and materials science, and what Boyle and Smith did was electrical engineering. That doesn't mean it's inferior. It just doesn't happen to be physics.

On the other hand, some engineers will justifiably ask what right the Nobel Committee has to give credit to physics for accomplishments that were made by engineers. This sort of award reinforces the conceit that any transformative technologies must derive from basic science. The reality is that many of the technologies that are changing our world, like Google or Wikipedia or eBay or iPhones or even cell-phone cameras, have little need for fundamentally new science--they rest on clever, imaginative, solid engineering.

The other science prizes have an even bigger mismatch, but for them it reflects the excitement and importance of biology, which has no prize of its own. "Physiology or Medicine," for example, has long been dominated by fundamental biology, which more and more relies on molecular biology. This trend leads to a collision with the "Chemistry" prize, which has also been increasingly dominated by molecular biology (not even biochemistry, really). Many prizes might fit equally well in either category, while other exciting fields are left out entirely.

The growing number of category-defying prizes reflects the reality that many exciting discoveries today lie between disciplines or in collaborations between disciplines, as the chemists, physicists, electrical engineers, materials scientists, biologists, and others who contribute to "nanotechnology" can attest. At the same time, some mature areas of physics and chemistry have really solved most of their interesting problems. Their practitioners have valuable skills and insights, but their traditional topics may not be offering the interesting challenges they did in the past. The Nobel categories are only a symptom of a larger issue in academic research, which rewards work that fits neatly within centuries-old disciplines.

The Last Ten Physics Nobels (Bold indicates those that are arguably engineering)

The Last Ten Chemistry Nobels (Bold indicates those that are arguably biology)

Wednesday, October 7, 2009

Ribosomes

The 2009 Nobel Prize in Chemistry was awarded to Venkatraman Ramakrishnan, Thomas Steitz, and Ada Yonath for their elucidation of the structure of ribosomes and how that structure promotes accurate translation of messenger RNA sequences into the amino acid sequences of proteins.

Ribosomes are the granddaddies of the ribonucleoprotein machines in the cell--big enough to be customarily granted organelle status even though they don't have a membrane. The bacterial version of the complex, for example, consists of two large parts, denoted 30S and 50S according to how fast they settle out of a suspension. Each of these subunits contains many proteins (20 and 33, respectively), together with large "ribosomal" RNA chains.

With the assistance of transfer RNA, ribosomes translate messenger RNA sequences (previously transcribed from DNA in the nucleus) into a corresponding amino-acid sequence, or polypeptide, which will be folded and processed into a mature, functioning protein. In light of this critical and intricate task, it should probably not be surprising that the core machinery of the ribosome is very similar in widely different species. Even so, bacterial, archaeal, and eukaryotic ribosomes differ in their details, and the differences in the ribosomal RNA were used by Carl Woese in the 1970s as the evidence for the highest-level classification of life forms into these three broad domains.
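If it helps to see what "translation" means at the sequence level, here is a cartoon version in Python, using just a handful of entries from the standard genetic code; the hard part, doing this chemistry quickly and accurately, is what the ribosome is for:

# A toy translation of an mRNA sequence, three bases (one codon) at a time.
# Only a few codons from the standard genetic code are included here.
CODONS = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "AAA": "Lys", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS.get(mrna[i:i+3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("AUGUUUGGCAAAUGA"))   # Met-Phe-Gly-Lys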

Streptomycin, tetracycline, and about half of current antibiotics preferentially disrupt bacterial protein synthesis by attacking the bacteria-specific versions of the ribosome. The details of the ribosome structure can guide researchers who hope to develop new types of antibiotic molecules.

The three researchers and their collaborators all studied the structure of ribosomes by x-ray crystallography. The resulting structures revealed specific details about how transfer RNA, with its individual matching amino acid cargo, nestles into the ribosome, and how the amino acid forms a covalent bond with the growing polypeptide. (The Nobel Committee notes that the charge-coupled devices that garnered this year's physics prize have made such studies much more productive.) Researchers have used these and other studies to clarify the entropy and energy that drive this synthesis.

A critical aspect of quality control in protein synthesis is the "proofreading" that ensures that the RNA sequence in the transfer RNA is indeed complementary to that of the messenger RNA. In 1974, John Hopfield (then at Princeton and Bell Labs) proposed that a multi-step ratchet that repeatedly checks the match while expending energy could be more selective than depending on the rather weak thermodynamic preference for a match. The increasingly refined structures revealed by the prize winners, together with other experiments, have confirmed how this proofreading works in the real ribosome, achieving an amazingly low error rate of about one error in 10,000 amino acids.
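Hopfield's argument can be captured in a couple of lines. A single equilibrium check can reject a wrong tRNA only by the Boltzmann factor for its (small) binding-energy penalty; an energy-consuming second check, made after the system has "forgotten" the first, roughly squares that factor. A toy estimate in Python (the 3 kcal/mol discrimination energy here is illustrative, not a measured number):

import math

kT = 0.593                # kcal/mol at roughly room temperature
ddG = 3.0                 # illustrative free-energy penalty for a mismatched tRNA (kcal/mol)

single_check = math.exp(-ddG / kT)    # error rate from one equilibrium discrimination step
proofread = single_check ** 2         # idealized proofreading: two independent checks

print(f"one check:  ~1 error in {1/single_check:,.0f}")   # ~1 in 160
print(f"proofread:  ~1 error in {1/proofread:,.0f}")      # ~1 in 25,000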

Tuesday, October 6, 2009

Optical Fiber

One half of the 2009 Nobel Prize in Physics goes to Charles Kao for his contributions to the development of low-loss optical fiber.

Although free-space transmission had been proposed by Alexander Graham Bell (and such optical techniques as smoke signals and mirrors were used for communication even earlier), optical signals did not become technologically important until the 1970s. The laser provided the needed bright light source, but sending light beams through the air or evacuated tubes never became widespread. Optical fiber, by avoiding the natural spreading of the beams and by letting them be routed like electrical signals in a wire, made widespread optical communication, including undersea transmission, possible.

Many researchers contributed to the development of practical fiber. Early studies had demonstrated the principles of total internal reflection that allowed light to be guided down gently curving paths, and the usefulness of an outer cladding to keep light from leaking out into the surroundings. But in the late 1960s, the attenuation in optical fibers would have prevented signals from being sent more than a few tens of meters.

Charles Kao helped to elucidate intrinsic loss mechanisms in silica (SiO2) fibers. Inherent density fluctuations in the fiber cause Rayleigh scattering, which increases for higher-frequency light (which is why the sky looks blue and the setting sun looks red). At the other end of the spectrum, low-frequency light is directly absorbed by atomic vibrations in the material. The best transmission occurs for intermediate frequencies where each of these processes is relatively unimportant, which for silica is in the near infrared region of the spectrum. If impurities could be removed from silica fibers, Kao showed, this material could be much clearer.
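A crude way to see the trade-off: Rayleigh scattering falls off as the fourth power of the wavelength, while the infrared absorption edge rises steeply at long wavelengths, so the sum has a minimum in between. The coefficients below are rough, textbook-style values for silica, good enough to show the shape of the curve but not for design work:

import math

def loss_dB_per_km(wavelength_um):
    rayleigh = 0.8 / wavelength_um**4                  # scattering: ~0.8 dB/km at 1 micron (approximate)
    infrared = 6e11 * math.exp(-48 / wavelength_um)    # vibrational absorption edge (approximate)
    return rayleigh + infrared

for lam in (0.85, 1.3, 1.55, 1.7):
    print(f"{lam} um: {loss_dB_per_km(lam):.2f} dB/km")
# The minimum, near 1.55 microns, comes out around 0.2 dB/km -- the telecom window.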

A major advance came from researchers at Corning, who in 1970 made very clear fibers using chemical-vapor deposition from very pure ingredients. This technique lowered the loss to a few dB per kilometer, making long-distance transmission feasible. Bell Labs later developed a modified deposition process that further reduced the loss from tiny amounts of residual hydrogen in the fibers. Previous prizes, like the Draper Prize, have often included the Corning and Bell Labs contributions to making fiber communication practical. Modern fibers have losses of around 0.2 dB per kilometer, meaning that a very useful 1% of the original light power will travel 100 km down the fiber, an astonishing degree of clarity for a solid material.
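The decibel arithmetic behind that claim is simple enough to check yourself:

loss_dB_per_km = 0.2

def surviving_fraction(km):
    # every 10 dB of loss costs a factor of 10 in power
    return 10 ** (-loss_dB_per_km * km / 10)

print(round(surviving_fraction(100), 3))   # 0.01 -> 1% of the light remains after 100 km
print(round(surviving_fraction(1), 3))     # 0.955 -> less than 5% lost in a full kilometer of glass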

Researchers have explored many variations on the silica fiber over the years. For example, a single crystal core might reduce density fluctuations, while a material without oxygen would have fewer high-energy vibrations and less absorption. Cheaper materials like plastic can make useful fibers for carrying light over a few meters. But for long-distance transmission, no material has displaced the silica that was championed by Charles Kao.

Charge-Coupled Image Sensors

One half of the 2009 Nobel Prize in Physics is shared by Willard Boyle and George Smith for the charge-coupled device (CCD) image sensor. Smith says they invented the CCD in 1969 at Bell Labs in Murray Hill, New Jersey, to give their semiconductor device organization leverage against a competing organization's magnetic bubble technology. (In a bit of Bell Labs dirty laundry, Gene Gordon says in this 2000 interview that he was originally listed with Boyle and Smith on the patent, and that he "never could understand why Bill Boyle was listed.") Although the magnetic technology never became important, the CCD became the mainstay of digital imaging for decades.

Electronic devices detect light when it liberates electrons, either into vacuum (as in a photomultiplier) or within a semiconductor. A key challenge is that these photogenerated electrons must outnumber the background "dark current." This becomes more difficult when light levels are low, as in many scientific experiments, or when individual pixels are made small so that they receive few photons. The CCD addresses this problem by electrically isolating the light-detecting region of the semiconductor from the detection circuitry, so it can accumulate electrons for seconds or longer if necessary. To read out the accumulated charge, the device uses a "bucket brigade" that efficiently passes the charge from detector to detector along a row. Circuitry at the end of the row measures the charge from each "bucket" in sequence as it arrives.
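Here's a toy model of that readout, just the bookkeeping and none of the device physics (the pixel values are invented):

# Toy bucket-brigade readout of one CCD row.
pixels = [120, 3500, 90, 15, 2200]   # made-up photoelectron counts after an exposure

readout = []
row = list(pixels)
for _ in range(len(pixels)):
    readout.append(row[-1])          # the amplifier at the end of the row measures the last bucket
    row = [0] + row[:-1]             # every remaining packet of charge shifts one well toward the output

readout.reverse()                    # measurements arrive last-pixel-first
print(readout)                       # [120, 3500, 90, 15, 2200]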

Since its invention, CCD technology has been used in thousands of scientific experiments, and it helped to establish the consumer digital camera industry. In the past decade or so, however, technologists have developed imagers based on the mainstream CMOS (complementary metal-oxide-semiconductor) technology by improving both the device design and the manufacturing process. These CMOS imagers are cheaper and can be integrated onto a single chip with the processing electronics, for which CMOS is standard. CCD technology is still preferred for the highest quality images and the lowest light levels, such as in the Hubble Space Telescope.

Over the years, researchers have considered CCD technology as a replacement for CMOS in low-power electronics, but it has not had wide impact in that application.

Monday, October 5, 2009

Medicine Nobel

The 2009 Nobel Prize in Physiology or Medicine was awarded to Elizabeth Blackburn, Carol Greider and Jack Szostak for unraveling the special role of the ends of chromosomes, and how they are maintained. The tips of the chromosomes, called telomeres, help ensure the eventual senescence of cells--a process that breaks down in cancers.

In the early days of DNA, as scientists clarified the molecular mechanisms of its replication, they realized that the ordinary step-by-step copying process would not function all the way to the end of the chain. Over time, it seemed, the DNA would get shorter and shorter, progressively infringing on coding sequences near the end of the chain. Szostak and Blackburn discovered that the ends contained a repeated six-base sequence, CCCCAA. A cap of proteins binds to this sequence and protects the tips--Blackburn has likened them to the plastic tips that keep the ends of shoelaces from fraying. Blackburn and her then-student Greider discovered the enzyme, telomerase, that recognizes and extends this sequence to prevent continued erosion of the genetic information during division.

Only a few cells normally produce telomerase, however. Leonard Hayflick had discovered that many cells, grown in culture, divide only a limited number of times--now known as the "Hayflick limit"--after which they enter an extended period of senescence. This observation suggested a built-in program for aging that might limit the longevity of whole multicellular organisms. Researchers quickly realized that the shortening of the telomere during DNA replication provided a natural mechanism for this limit to the number of cell divisions.
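A cartoon of that counting argument, with purely illustrative numbers (human telomeres are very roughly ten thousand base pairs long and lose on the order of a hundred base pairs per division):

telomere_bp = 10_000        # illustrative starting telomere length (base pairs)
loss_per_division = 100     # illustrative loss at each replication
critical_bp = 5_000         # below this, assume the cell stops dividing

divisions = 0
while telomere_bp > critical_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(divisions)   # 50 -- in the neighborhood of the Hayflick limit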

Some ordinary cells, like those that lead to sperm and eggs, naturally make the telomerase that restores the telomeres. The enzyme is also produced by many cancer cells, which is one of the reasons that they are able to evade the usual limitations on cell division.

The role of telomerase in preventing cellular senescence suggested to many researchers that the enzyme might also arrest aging in complex creatures like ourselves. The biotech company Geron, for example, was founded in 1992 in the hope of exploiting telomerase against aging. Of course, one of the dangers of such an approach would be that it might remove protections against cancers. On the other hand, researchers have sought drugs that suppress telomerase as potential anti-cancer agents.

The reality of aging is, not surprisingly, more complicated, and telomerase has not proven to act like the mythical fountain of youth. Geron has moved on to other pursuits, notably stem cells. The current wave of excitement about anti-aging centers on completely different drugs aimed at activating the same molecular pathways as severe caloric restriction, which has long been known to extend life even in mammals.

Even though telomeres and telomerase have not released us from aging, however, their discovery has clarified important aspects of cellular division and programmed senescence, and stimulated new approaches to drug development.

A very good book that discusses the Hayflick limit, telomeres, Geron, caloric restriction and much more is Stephen S. Hall's 2003 Merchants of Immortality.

Thursday, October 1, 2009

Turn, Turn

Everyone knows that a 360° rotation brings an object back where it started.

But sometimes it doesn't.

Here's a demonstration you can do right now--and you should, because this trick is too strange to believe unless you do it yourself.

Take a small object. It doesn't much matter what it is--a business card, a pen, a water glass--just something you can keep vertical as you rotate it in the horizontal plane.

Hold out your right hand, palm up. Use your left hand to lower the object into the grasp of the fingers and thumb of the right hand. This hand and the object will move together from here on.

First rotate the object counterclockwise (looking down) by moving your elbow to your right and your hand to your left. Keep going as the object passes under your arm. To finish a full rotation, if your joints are like mine, you'll have to lift the object up to face level.

At this point, the object has completed a full rotation, and is oriented the way it started. But your contorted arm is telling you that not everything is the same!

Obviously one way to return to the starting point is to reverse the motion that got you here. But there's another way: keep going.

Your arm may object to the idea of becoming even more twisted, but bear with me. Holding the object over your head, keep rotating it in the same direction you were turning it before. This time, though, instead of your arm passing over it, your arm will pass under it.

If you did this right, when you complete the second full rotation both the object and your arm end up back in their comfortable starting positions. Cool, huh? Once you get good at it, you can do it with a partially filled glass of water, which should convince you that you aren't flipping it over at some point in the motion.

The trickiest part of this trick is that it's not a trick.

This is a little-known property of three-dimensional space: although one full rotation leaves a disconnected object unchanged, two full rotations leave objects unchanged even if they are connected to the (non-rotating) rest of the world. It's not a property of your arm. You can connect the object to its surroundings with as many rubber bands as you like, and you will always be able to untangle them after two full rotations.

Is this just a strange factoid? Maybe. But consider that many elementary particles, notably protons, neutrons, and electrons, have this same property: it takes two full rotations to bring them back where they started. Not only that, but swapping any two of these particles leaves a clear quantum-mechanical signature. (I once saw the mathematical proof that the rotation and swapping properties are intimately connected in an advanced physics class. I was very proud to understand it. For a few hours.)
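If you want to see the two-turns rule in the quantum mechanics itself, the rotation operator for a spin-1/2 particle does exactly this: rotating by an angle theta about the z axis multiplies the two spin components by phases of plus or minus theta/2, so one full turn flips the overall sign of the state and only a second full turn undoes it. A minimal numerical check, in Python with NumPy:

import numpy as np

def rotate_spin_half(theta):
    # Spin-1/2 rotation about z: exp(-i * theta * Sz / hbar), written out as a 2x2 matrix
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

print(np.round(rotate_spin_half(2 * np.pi), 3))   # minus the identity: one full turn flips the sign
print(np.round(rotate_spin_half(4 * np.pi), 3))   # plus the identity: two full turns bring it back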

Not to get all new-agey, but these properties make a lot more sense if you envision electrons as embedded in the larger universe, rather than as independent particles floating freely in space.