Medical treatment in the California workers’ compensation system now requires that the treatment pass the scrutiny of “evidence-based medicine,” meaning that scientific studies must support the safety and efficacy of the requested care. That might seem like a straightforward process.
But an article in Scientific American claims that false positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies claiming that some drug or treatment is beneficial have not held up. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed.
The problem begins with the public’s rising expectations of science. Being human, scientists are tempted to show that they know more than they do. The number of investigators – and the number of experiments, observations and analyses they produce – has also increased exponentially in many fields, but adequate safeguards against bias are lacking. Research is fragmented, competition is fierce and emphasis is often given to single studies instead of the big picture.
Much research is conducted for reasons other than the pursuit of truth. Conflicts of interest abound, and they influence outcomes. In health care, research is often performed at the behest of companies that have a large financial stake in the results. Even for academics, success often hinges on publishing positive findings. The oligopoly of high-impact journals also has a distorting effect on funding, academic careers and market shares. Industry tailors research agendas to suit its needs, which also shapes academic priorities, journal revenue and even public funding.
The crisis should not shake confidence in the scientific method. The ability to prove something false continues to be a hallmark of science. But scientists need to improve the way they do their research and how they disseminate evidence.
First, we must routinely demand robust and extensive external validation – in the form of additional studies – for any report that claims to have found something new. Many fields pay little attention to the need for replication or do it sparingly and haphazardly. Second, scientific reports should take into account the number of analyses that have been conducted, which would help screen out false positives. Of course, that would mean some valid claims might get overlooked. Here is where large international collaborations may be indispensable. Human-genome epidemiology has recently had a good track record because several large-scale consortia rigorously validate genetic risk factors.
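As a rough, back-of-the-envelope illustration of why the number of analyses matters, the sketch below shows how the chance of at least one false positive grows as more independent tests are run, and how a Bonferroni-style correction (dividing the significance threshold by the number of tests) keeps it in check. The 0.05 threshold and the test counts are illustrative assumptions, not figures from the article.

```python
# Illustrative only: how false positives accumulate across many analyses.
alpha = 0.05  # assumed per-test significance threshold

for m in (1, 10, 50, 100):
    # Chance of at least one false positive when all m null hypotheses are true
    uncorrected = 1 - (1 - alpha) ** m
    # Same chance when each test instead uses the Bonferroni threshold alpha/m
    corrected = 1 - (1 - alpha / m) ** m
    print(f"{m:>3} analyses: uncorrected {uncorrected:.2f}, corrected {corrected:.3f}")
```

The trade-off the article notes is visible here: the stricter threshold controls false positives, but it also makes genuine effects harder to detect in any single study.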
Fuller disclosure of conflicts of interest would also help, but many scientists engaged in high-stakes research will refuse to make thorough disclosures. More important, much essential research has already been abandoned to the pharmaceutical and biomedical device industries, which may sometimes design and report studies in ways most favorable to their products. This is an embarrassment. Increased investment in evidence-based clinical and population research, for instance, should support studies designed not by industry but by scientists free of material conflicts of interest.
Eventually, findings that bear on treatment decisions and policies should come with a disclosure of any uncertainty that surrounds them. It is fully acceptable for patients and physicians to follow a treatment based on information that has, say, only a 1 percent chance of being correct. But we must be realistic about the odds.
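To make “realistic about the odds” concrete, here is a minimal sketch of the post-study probability that a positive finding is real, given an assumed prior probability that the hypothesis is true, the study’s power, and its significance threshold. The parameter values (80 percent power, a 0.05 threshold, and the example priors) are assumptions for illustration, not figures from the article.

```python
# Illustrative only: share of "positive" results that reflect a real effect,
# given an assumed prior, study power, and significance threshold.
def prob_finding_is_true(prior, power=0.8, alpha=0.05):
    true_positives = power * prior          # real effects that are detected
    false_positives = alpha * (1 - prior)   # null effects that test "positive"
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):
    print(f"prior {prior:.2f} -> chance a positive finding is real: "
          f"{prob_finding_is_true(prior):.2f}")
```

Under these assumed numbers, when only a small fraction of tested hypotheses are true, most published “positive” findings would be false, which is the kind of uncertainty the article argues should be disclosed alongside treatment recommendations.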