Six years ago, I wrote a story for Bloomberg News about an interesting research review that looked at which studies of antidepressants such as Prozac, Paxil and Zoloft got published in medical journals and which didn’t. The review found that almost every clinical trial that got published in a medical journal—a whopping 94 percent of them—had positive findings, meaning they showed the drugs worked. Those were the studies that got published; those were the studies that doctors and patients could turn to for guidance.
But did they accurately represent the conclusions of all the studies of a particular drug that had been conducted? Hardly. The psychiatrist who led the review, a former FDA medical officer named Erick Turner, knew about another source of information on completed studies: the application packet that drugmakers submit when they seek to get a medication approved. This trove encompassed all completed studies assessing a drug’s effectiveness, including those that were never published in journals.
When Turner, an associate professor of psychiatry at Oregon Health & Science University in Portland, looked at the full sets of data for antidepressants, he found that about half of the studies concluded the drugs were effective at treating depression and half of them didn’t. Bottom line: studies showing that antidepressants worked got published; studies showing they didn’t went unpublished, and few people knew they existed.
The study caused a bit of a stir. Jeffrey Drazen, the editor of the New England Journal of Medicine, where Turner’s review was published, told me then it was evidence of “publication bias”—the tendency for positive, but not negative, findings to make their way into print. Despite the best efforts of journal editors to publish a balance of findings, Drazen told me, “what's reported is really a much more rosy situation than actually exists.”
OK, you may be saying, but this is six-year-old news, and by now things must have changed. Two recent studies suggest that when it comes to publication bias, in fact, things have not changed much at all.
Back in 2008, many people assured me change was coming and transparency was taking hold. After all, Congress, just the year before, had passed legislation requiring researchers to release results of all trials on clinicaltrials.gov, a website run by the National Institutes of Health, within a year of completion. The editors of major medical journals had banded together and said they would refuse to publish the results of any study that hadn’t been listed on the trial site from the time the study began.
Yet last month, Erick Turner and some colleagues released a new analysis, in the journal JAMA Psychiatry, looking this time at drugs used to treat anxiety disorders—and they found almost the same results. Nearly every study (40 of 41) that the FDA determined to be positive ended up getting published. Of the studies the FDA determined weren't positive, almost half didn't get published; those that did were largely published in ways that conflicted with the FDA's findings.
“When the news is good, drug companies are going to see to it that the world knows about it and get the data published,” Turner told me. “If the news is bad, the data is not quite as forthcoming.”
By 2013, under pressure from a raft of bad publicity and lawsuits, most major drug companies had signed on to a commitment made by their trade association to make available to “qualified” researchers detailed data from completed clinical trials for drugs that are approved and on the market. A number of companies have also begun posting data summaries from some clinical trials on their websites.
Eric Peterson, director of the Duke Clinical Research Institute, says things are improving incrementally and the public now has access to more information about studies that are underway. Yet a study he co-authored, published last month in the New England Journal of Medicine, found that most results of clinical trials are still going unreported.
Peterson and his colleagues combed through some 13,000 clinical trials, most sponsored by drug or medical device companies, to see if they complied with the new legislation requiring them to post results on the NIH website. The study found that only 38 percent of studies completed between 2008 and August 2012 had reported results by September 2013. Even fewer, 13 percent, had posted their results within the one-year time limit established by Congress.
“There is still a problem here,” Peterson says. “Evidence can be hidden or misconstrued and the doctor and patient can be misled. But now we know more about it.”
Peterson wants all clinical trial research to be made fully public, not just the conclusions. “Open access is the ultimate solution,” he says.
Before he began working at the FDA in 1998, Turner treated patients as a psychiatrist in private practice and “naively believed all the literature I was reading. My assumption was the drugs work and when they don’t, there’s something unusual about the patient.” Once he started working at the drug agency, he got to see a lot of negative studies that the public never saw and realized his assumptions had been wrong.
Today, Turner says, doctors and patients are more skeptical about the results of industry-sponsored research. That’s not all bad, since healthy skepticism can help people evaluate medical claims more carefully. The problem for the drug companies is that now even well-designed and well-run clinical studies with positive outcomes are likely to be viewed with distrust. In that way, publication bias has created a new type of postmodernist mindset, as doctors and the public puzzle over who and what they can believe.