Reports of science’s demise have been greatly exaggerated.
Any scientist who reads the above New Yorker article from Dec. 13 should get a big laugh. It professes to reveal a disturbing phenomenon in which scientific “truths” become clearly established in the literature through replication, then mysteriously fade over time as more studies are done that fail to find the same effect, or that find steadily weaker effects. It profiles a psychologist, Jonathan Schooler, who became intrigued with this phenomenon after he discovered it in his own research. He did a few studies that showed a strong psychological effect; when he later tried to replicate those experiments, he failed, and realized that his original work was not as ironclad as he had naturally assumed at the time.
The author of the article, Jonah Lehrer, states:
For many scientists, the [phenomenon] is especially troubling because of what it exposes about the scientific process. If replication is what separates the rigor of science from the squishiness of pseudoscience, where do we put all these rigorously validated findings that can no longer be proved?
This is, of course, a straw man. The phenomenon is not at all troubling to anyone who understands what science actually is: a self-correcting process. The “decline effect,” presented as an amazing phenomenon spanning many scientific disciplines, is exactly what science is all about.
But what is science anyway? Most people trying to answer this question will invoke the scientific method, but this isn’t quite right. The ideas on the scientific method articulated by Karl Popper are much more philosophical than prescriptive. It seems that at least some of his ideas were developed in response to the obvious subjectivity of Sigmund Freud’s work in psychology. (Unfortunately, to this day, psychology is generally more conjecture than science, because of the difficulty in reliably establishing most of what goes on inside the human brain.) Philosophers, like everyone else, may have their own agendas in the development of ideas.
The best way to describe science is as a way to construct a model of how the world works. Real scientific “truth” is an elusive target, and nearly impossible to establish when (easily manipulated) statistical methods – which after all are based only on high and low probabilities – are used to test an idea, as is the norm, particularly in biology and medicine. Instead, what scientists are really testing is not the truth of an idea, but whether or not it is a good model of the world.
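The decline effect follows almost mechanically from this reliance on probability thresholds. A minimal sketch of the underlying statistics (all parameters here are assumed purely for illustration): when many noisy studies measure a small real effect and only the ones that clear a significance cutoff get noticed, the noticed studies systematically overestimate the effect, so later replications appear to show it “declining.”

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # the real standardized effect size (assumed)
N = 25              # subjects per study (assumed)
STUDIES = 2000      # number of independent labs running the study

def run_study():
    """One study: N noisy observations of the true effect."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    return mean, mean / se  # observed effect size and its test statistic

results = [run_study() for _ in range(STUDIES)]

# Keep only the studies that clear an (approximate) p < 0.05 cutoff --
# a crude stand-in for what gets published and cited.
published = [effect for effect, t in results if t > 1.96]

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean over all studies:    {statistics.fmean(e for e, _ in results):.3f}")
print(f"mean over 'published':    {statistics.fmean(published):.3f}")
```

The filtered (“published”) mean comes out well above the true effect, while the average over all studies sits right at it. No fraud or bias by any individual researcher is required; the selection step alone guarantees that early headline results will shrink under replication.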
An example from Lehrer’s article is fluctuating asymmetry (which, although discussed in the ecological literature, is not really about ecology but psychology – it’s all about the mysteries of what goes on in a female brain when she chooses her male mate). This was the idea that females prefer symmetrical males as a signal of their superior genes, which was all the rage in the early ’90s but has been pretty well refuted as a generalizable concept. For a while, this seemed to be an acceptable model for mate choice in animals. But the idea was actually pretty crazy, looking back on it: that females somehow have the ability to detect differences in wing length (for example) of a millimeter or less, and that such a difference will affect the fitness of their offspring. Continued experimentation over 8-10 years showed it to be a bad model. There certainly may be individuals who will argue for that model until their dying day, but science has functioned exactly as it should in discarding it.
Darwinian evolution is also a model. In its case, further experimentation has continued to show it to be a good model for some types of evolution (there are other models that explain other types of evolution, such as drift and assortative mating). Does that mean it has somehow been “proved” as fact? In the purest sense of philosophical “truth,” no. In all practical ways of looking at the world, absolutely yes. It is by far the best model we have for much of the evolutionary phenomena we observe, and it continues to be predictive. (Anti-evolutionists, of course, believe that the existence of models other than natural selection is an argument against evolution in general. They apparently haven’t thought through that there is nothing inconsistent about evolution having multiple mechanisms.)
So the idea that the scientific method is somehow failing us because of bias is, of course, hogwash. Any biased idea in the literature will eventually be discredited, because science does not have an ideology.
But the core problem that the article starts with is real: drugs and other medical protocols are often approved on the strength of bad science (a major theme of this blog). There is a lot of bad science in the medical literature, but that is not as big a problem as the fact that drugs are distributed or recommended to the public before the scientific process has a chance to refute bad results:
In 2005, [John] Ioannidis published an article in the Journal of the American Medical Association that looked at the forty-nine most cited clinical-research studies in three major medical journals. Forty-five of these studies reported positive results, suggesting that the intervention being tested was effective. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.
To be fair to researchers, this problem may not be quite as bad as it seems. The FDA has more than once rushed drugs to market against the recommendations of its advisory panel, which is tasked with evaluating the science. Schooler, who investigated how his own biases shifted his experimental results, now joins many scientists in advocating a relatively easy way to provide at least a partial check on the strong influences that bias many medical studies: the establishment of a protocol database in which researchers must register their experimental methods before an experiment is conducted, and the results afterward. This would allow outside scientists to challenge and help correct flawed methodology (which routinely gets by reviewers and editors, who also often have a stake in the results). But perhaps most important, it would go a long way toward preventing the burying of negative results, a common practice that heavily skews the data available for review by the FDA.
The Atlantic had a somewhat similar article focusing on medical research in November.
See an interesting philosophical discussion on Karl Popper and his views at Jacob Scriftman’s blog. Wikipedia also has a good discussion of Popper’s philosophy and criticisms of it.