Monday, February 8, 2010

Blinded Science

Medical researchers systematically shield their results from their own biases. Other scientists, not so much.

Medical science includes many types of studies, but the universally accepted "gold standard" is the randomized, controlled, double-blind, prospective clinical trial. In this kind of study, patients are randomly assigned to receive either the treatment being tested or an ineffective substitute, and their responses are then evaluated according to predefined criteria.
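The randomization step itself is simple enough to caricature in a few lines. Here is a minimal sketch, with made-up patient IDs standing in for real enrollment records; a real trial would of course involve far more machinery around this core:

```python
import random

# Hypothetical patient IDs; a real trial would draw on enrollment records.
patients = [f"patient-{i:03d}" for i in range(1, 21)]

# Randomly split patients between the treatment and placebo arms.
random.shuffle(patients)
half = len(patients) // 2
assignments = {pid: "treatment" for pid in patients[:half]}
assignments.update({pid: "placebo" for pid in patients[half:]})

# The assignment table is held by a third party; neither patients nor
# clinicians see it until the predefined outcomes have been recorded.
for pid in sorted(assignments):
    print(pid, assignments[pid])
```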

"Blind" means that the patients don't know whether they are getting the "real thing" or not. This is important because some will respond pretty well even to a sugar pill, and a truly effective drug must at least do better than that. This feature may not be so important in other fields, assuming that the rats or superconductors they study don't respond to this "placebo effect."

But the "double" part of "double blind" is also critical: in a proper trial, the doctors, nurses, and the researchers themselves don't know which patients are getting the real treatment until after the results are in. Without this provision, experience shows, they might treat the subjects differently, or evaluate their responses differently, and thus skew the conclusions. The researchers have a lot invested in the outcome, they have expectations, and they are human.

So are other scientists, I'm afraid.

It is surprising that most fields don't expect similar protection against self-deception. Sure, it's not always easy. In my Ph.D. work, for example, I made the samples, measured them, and interpreted the results. Being at the center of all aspects of the research helped keep me engaged and excited in spite of the long hours and modest pay, and was good training. But I also felt the pressure to get more compelling results.

These days, many fields already involve multidisciplinary collaborations that leverage the distinct skills of different specialists. Would it be so hard for the person who prepares a sample to withhold its detailed provenance from the person doing the measurement until the measurement is finished? With some effort, people could even hide information from themselves, for example by randomly labeling samples. No doubt it takes some of the immediate gratification out of the measuring process, but the results would be more trustworthy.
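To make the random-labeling idea concrete, here is a minimal sketch with invented sample names: a short script assigns each sample an uninformative code and writes the key to a file that nobody opens until the measurements are finished. The filenames and sample labels are illustrative, not from any real experiment:

```python
import json
import random

# Hypothetical sample identifiers; in practice these would encode the
# provenance (growth conditions, annealing, doping) we want to hide.
samples = ["annealed-A", "annealed-B", "as-grown-A", "as-grown-B"]

# Assign each sample a random, uninformative code.
codes = [f"S{n:02d}" for n in range(1, len(samples) + 1)]
random.shuffle(codes)
key = dict(zip(codes, samples))

# Stash the code-to-sample key where the person measuring won't look
# until every measurement has been recorded.
with open("blinding_key.json", "w") as f:
    json.dump(key, f, indent=2)

print("Measure these, in this order:", sorted(key))
```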

As recent events have made clear, trust is especially important in climate science. In October, Seth Borenstein, a journalist with the Associated Press, gave a series of measurements to four statisticians without telling them what the data represented. Each expert independently found the same thing: a long-term upward trend, with no convincing sign of the supposed cooling of the last ten years. Only afterward were the data revealed to be global temperature measurements.
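The statisticians' task can be caricatured in a few lines: given an unlabeled series of values, ask whether a long-term trend is present. A toy version of that blind analysis, using synthetic stand-in numbers rather than the actual temperature record:

```python
import random

# Synthetic stand-in data: a slow upward drift plus noise. The analyst
# sees only the numbers, not what they measure.
random.seed(0)
values = [0.01 * step + random.gauss(0, 0.15) for step in range(130)]

# Ordinary least-squares slope of value against index.
n = len(values)
xbar = (n - 1) / 2
ybar = sum(values) / n
slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(values)) / \
        sum((i - xbar) ** 2 for i in range(n))

print(f"fitted trend: {slope:+.4f} units per step")  # positive => long-term rise
```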

But why did it take a journalist to ask the question this way? Shouldn't this sort of self-doubt be built into all science at a structural level, not just assumed as an ethical obligation?

Experimental science will always be a human endeavor, I hope, so the results can never be completely divorced from expectations. But there are ways of making it more trustworthy.
