
Friday, November 13, 2009

Lies, Damn Lies, and…

Statistics don't lie. People do.

I have the greatest respect for statisticians, who methodically sift through messy data to determine what can confidently and honestly be said about them. But even the most sophisticated analysis depends on how the data were obtained. The minuscule false-positive rate for DNA tests, for example, is not going to protect you if the police swap the tissue samples.

One of the core principles in clinical trials is that researchers specify what they're looking for before they see the data. Another is that they don't get to keep trying until they get it right.

But that's just the sort of behavior that some drug companies have engaged in.

In the Pipeline informs us this week of a disturbing article in the New England Journal of Medicine. The authors analyzed twenty different trials conducted by Pfizer and Parke-Davis evaluating possible off-label (non-FDA-approved) uses for their epilepsy drug Neurontin (gabapentin).

If that name sounds familiar, it may be because Pfizer paid a $430 million fine in 2004 for illegally promoting just these off-label uses. As Melody Petersen reported for The New York Times and in her chilling book, "Our Daily Meds," company reps methodically "informed" doctors of unapproved uses, for example by giving them journal articles on company-funded studies. The law then allows the doctors to prescribe the drug for whatever they wish.

But the distortion doesn't stop with the marketing division.

The NEJM article draws on internal company documents obtained through legal discovery. Of the 20 clinical trials, only 12 were published. Of these, eight reported a statistically significant outcome that was not the one described in the original experimental design. The authors write: "…trials with findings that were not statistically significant (P≥0.05) for the protocol-defined primary outcome, according to the internal documents, either were not published in full or were published with a changed primary outcome."

A critical reason to specify the goals, or primary outcome, ahead of time is that the likelihood of getting a statistically significant result by chance increases as more possible outcomes are considered. In genome studies, for example, the significance threshold is typically divided by the number of genes tested, which is the number of possible outcomes.
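That division is the standard Bonferroni correction. A minimal sketch, with purely illustrative numbers (the gene count is only roughly the size of the human protein-coding genome):

```python
# Bonferroni-style correction: when a study examines many possible
# outcomes, divide the significance threshold by the number of tests.
# The counts below are illustrative, not from any particular study.

alpha = 0.05        # conventional single-test threshold
n_genes = 20_000    # roughly the number of human protein-coding genes

adjusted = alpha / n_genes
print(f"per-gene threshold: {adjusted:.2e}")

# With the stricter threshold, the chance of at least one false
# positive across all tests stays near alpha (for independent tests):
family_wise = 1 - (1 - adjusted) ** n_genes
print(f"family-wise error rate: {family_wise:.3f}")  # about 0.049
```

Without the correction, testing 20,000 genes at the 5% level would produce about a thousand spurious "hits" even if no gene mattered at all.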

None of this would be surprising to Peterson. She described a related practice in which drug companies keep doing trials until they get two positive outcomes, which is what the FDA requires for approval.

By arbitrary tradition, the numerical threshold for statistical significance is taken as a P-value of 5% or less: at most a one-in-twenty chance that the outcome arose by luck alone. This means that if you run 20 trials of an ineffective drug, you have about a 64% chance (1 − 0.95^20) of getting at least one that is "significant," even if there is no effect.

A related issue arose for the recent, highly publicized results of an HIV/AIDS vaccine test in Thailand. Among three different analysis methods, one came up with a P-value of 4%, making it barely significant.

This means that only one in twenty-five trials like this would get such a result by chance. That makes the trial a success, by the usual measures.

But this trial is just one of many trials for potential vaccines, most of which have shown no effect. The chance that at least one of these trials would give a positive result by luck alone is much larger, presumably more than 5%.

In addition, the Thai vaccine was expected to work by slowing down existing infection. Instead, the data show reduced rates of initial infection. Measured in terms of final outcome (death), it was a success. But in some sense the researchers moved the goalposts.

Sometimes, of course, a large trial can uncover a real but unanticipated effect. It makes sense to follow up on these cases, recognizing that a single result is only a hint.

Because of the subtleties in defining the outcome of a complex study, there seems to be no substitute for repeating a trial with a clearly defined primary outcome specified in advance. Good science writers understand this. It would be nice to think that the FDA did, too, and established procedures to ensure reliable conclusions.

Monday, November 9, 2009

The Wisdom of Ignorance

Physics and math demand a certain mode of thought. Lots of people think that doing well in those classes takes intelligence, and that's part of it. But they also require something else that is not always a good thing: comfort with abstraction, or stripping problems down to an idealized cartoon.

Those of us who excelled in these subjects can be a bit smug towards those who didn't, but replacing real life with a cartoon isn't always a good thing. In addition to hindering social relations, it can obscure important truths.

It's interesting to contrast Aristotle's view, for example--that objects in motion naturally come to rest--with Newton's--that they naturally keep moving. Thinking about familiar objects, you have to grant that Aristotle had a good point. Of course, he'll leave you flat: if you figure out how to include friction, Newton is going to get you a lot further--even to the moon. But beginning students are asked to commit to an abstract formalism that has a stylized and flawed relationship to the world they know.

Probability has a similar problem.

Most normal people, for example, expect that after a flipped coin shows a string of heads, tails is "due." Probability theory says otherwise: the coin doesn't "know" what happened before, so the chances on the next flip are still 50/50. Abstraction wins, intuition loses.
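The coin's lack of memory is easy to check by simulation. A quick sketch (an idealized fair coin, so the caveat in the next paragraph doesn't apply):

```python
import random

# Test the "tails is due" intuition: after a run of three heads,
# is tails any more likely on the next flip?
random.seed(1)

flips = [random.choice("HT") for _ in range(200_000)]

# Collect the flip that follows every run of three heads
next_after_streak = [
    flips[i + 3]
    for i in range(len(flips) - 3)
    if flips[i:i + 3] == ["H", "H", "H"]
]

tails_fraction = next_after_streak.count("T") / len(next_after_streak)
print(f"P(tails | three heads just occurred) = {tails_fraction:.3f}")
# stays near 0.5 -- the coin has no memory
```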

[Actually, Stanford researchers showed in 2007 (pdf here) that unless a coin is flipped perfectly, the results will not be exactly 50/50: if the coin is spinning in its plane at all, its angular momentum will tend to keep it pointed the way it started out. But that's a minor issue.]

On the other hand, there are lots of cases where common intuition is "directionally correct." Take another staple of introductory probability courses: a bag full of different-colored balls. In this case, the probability won't stay the same unless you put each ball back in the bag after you choose it. If you keep it, choosing a ball of one color will increase the chances of a different color on the next pick, in keeping with intuition.
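The arithmetic bears the intuition out. A small sketch, using a hypothetical bag of five red and five blue balls:

```python
from fractions import Fraction

# A bag with 5 red and 5 blue balls (illustrative numbers). Compare the
# chance of drawing blue on the second pick, given the first pick was red.
red, blue = 5, 5

# With replacement: the bag is unchanged, so the odds are unchanged.
p_with = Fraction(blue, red + blue)            # 1/2

# Without replacement: one red ball is gone, so blue is now more
# likely, just as intuition suggests.
p_without = Fraction(blue, (red - 1) + blue)   # 5/9

print(f"with replacement: {p_with}, without: {p_without}")
```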

Of course, intuition doesn't get the answer with any precision, and it gets it completely wrong for the coin flip. To do it right, you need the abstract formalism. Still, it's easy to imagine that our brains are hard-wired with an estimating procedure that gets many real-world cases about right.

In other cases, our intuition is more flexible than slavish devotion to calculation. Suppose I start flipping a coin. It's not surprising to see heads the first time, and the second time. How about the third time? The tenth? If it keeps coming up heads, you will quickly suspect that there's a problem with your original assumption that the probability is 50%. Your thinking will make this shift naturally, even if you might be hard pressed to justify it with a calculation. Probability theory is not going to help much when the assumptions are wrong.
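One way to formalize that intuitive shift is a Bayesian update. A minimal sketch, pitting a fair coin against a two-headed one (the 50/50 prior and the two-hypothesis setup are illustrative assumptions, not from the post):

```python
# Bayesian account of the intuitive shift: compare "fair coin" against
# "two-headed coin" as heads accumulate. The 50/50 prior between the
# two hypotheses is an illustrative assumption.

prior_fair = 0.5
prior_rigged = 0.5

for n_heads in [1, 3, 10]:
    # Likelihood of n straight heads under each hypothesis
    like_fair = 0.5 ** n_heads
    like_rigged = 1.0 ** n_heads
    posterior_fair = (prior_fair * like_fair) / (
        prior_fair * like_fair + prior_rigged * like_rigged
    )
    print(f"{n_heads:2d} heads -> P(fair coin) = {posterior_fair:.4f}")
```

After ten straight heads, belief in the fair coin has collapsed to a fraction of a percent, which is roughly the point at which most people start examining the coin.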

It's true that people are notoriously bad at probability. ScienceBlogger Jason Rosenhouse has just devoted an entire book to one example, "The Monty Hall Problem: The Remarkable Story of Math's Most Contentious Brain Teaser." (It was also discussed in 2008's The Drunkard's Walk, by Leonard Mlodinow, and in The Power of Logical Thinking, by Marilyn vos Savant (1997), the Parade columnist who popularized it.)

The Daily Show's John Oliver amusingly explored (especially at around minute 3:20) how simple probability estimates help us misunderstand the chances that the Large Hadron Collider will destroy the world.

Still, our innate estimation skills developed to deal tolerably well with a wide variety of situations in which we had only a vague notion of the underlying principles. Highly contrived, predictable situations like the coin flip would have been the exception. Even though our intuition frequently fails in detail, it helped us survive a complex, murky world.