Sunday, September 20, 2015

Ioannidis - Why His Landmark Paper Will Stand The Test of Time

John P. A. Ioannidis came out with an essay in 2005 that is a landmark of sorts.  In it he discussed the concern that most published research is false and the reasons behind that observation (1).  That led to responses in the same publication about how false research findings could be minimized or, in some cases, accepted (2-4).  Anyone trained in medicine should not find these observations surprising.  In the nearly 30 years since I was in medical school, findings come and findings go.  Interestingly, that was a theory I first heard from a biochemistry professor who was charged with organizing all of the medical students into discussion seminars where we would critique current research from a broad spectrum of journals.  His final advice to every class was to keep reading the New England Journal of Medicine for that reason.

Many people have an inaccurate view of science, especially as it applies to medicine.  They think that science is supposed to be true and that it is a belief system.  In fact, science is a process, and initial theories are supposed to be the subject of debate and replication.  If you look closely at the discussion section of any paper based on correlative research, you will invariably find the researchers saying that their findings are suggestive and need further replication.  In the short time I have been writing this blog, asthma treatments, the Swan-Ganz catheter, and the diagnosis and treatment of acute bronchitis and acute exacerbations of chronic obstructive pulmonary disease have all provided clear examples of how theories and the old standard of care necessarily change over time.  It is becoming increasingly obvious that reproducible research is in short supply.

Ioannidis provided six corollaries with his original paper.  The first four, concerning statistical power, effect size, the number of relationships tested, and the degree of design flexibility, are all relatively straightforward.  The last two corollaries are more focused on subjectivity and are less accessible.  I think it is common when reading research to look at the technical aspects of the paper and all of the statistics involved and forget about the human side of the equation.  From the paper, his fifth corollary follows:

"Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u.  Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27].  Prejudice may not necessarily have financial roots.  Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings.  Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.  Such non-financial conflicts may also lead to distorted reported results and interpretations.  Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28]"  all from Reference 1.

The typical conflict of interest arguments seen in medicine have to do with financial conflicts of interest.  If the current reporting database is to be believed, they may be considerable.  A commentary in Nature earlier this month (5) speaks to the non-financial side of conflicts of interest.  Its primary focus is reproducibility as a marker of quality research.  The authors cite the finding that two-thirds of members of the American Society for Cell Biology were unable to reproduce published results, and that pharmaceutical researchers were able to reproduce the results of one-quarter or fewer of high-profile papers.  They describe this as the burden of irreproducible research.  They touch on what scientific journals have done to counter some of these biases: basically checklists of good design and more statisticians on staff.  That may be the case for Science and Nature, but what about the raft of online open-access journals that not only have a less rigorous review process but in some cases require the authors to suggest their own reviewers?

A central piece of the Nature article was a survey of 140 trainees at the MD Anderson Cancer Center in Houston, Texas.  Nearly 50% of the trainees reported mentors requiring them to have a high-impact paper before moving on.  Another 30% felt pressured to support their mentor's hypothesis even when the data did not support it, and about 18% felt pressured to publish uncertain findings.  The authors suggest that the home institutions are where the problem lies, since that is where the incentive for this behavior originates.  They say that the institutions themselves benefit from the perverse incentives that lead researchers to accumulate markers of scientific achievement rather than high-quality reproducible work.  They want the institutions to take corrective steps toward more reproducible research.

One area of bias that Ioannidis and the Nature commentators are light on is the political bias that seems to preferentially affect psychiatry.  If reputable scientists are affected by the many factors previously described, how might a pre-existing bias against psychiatry, various personal vendettas, a clear lack of expertise and scholarship, and a strong financial incentive in marshaling and selling to the antipsychiatry throng work out?  Even if there is a legitimate critic in that group, how would you tell?  And even more significantly, why is it that no matter what the underlying factors, conspiracy theories are the inevitable explanations rather than any real scientific dispute?  Apart from journalists, I can think of no group of people more committed to their own findings, or to the theory that monolithic psychiatry is the common evil creating all of these problems, than the morally indignant critics who like to tell us what is wrong with our discipline.  Knowing their positions and, in many cases, their over-the-top public statements, why would we expect them, sifting through thousands of documents, to produce a result other than the one they would like to see?

I hope that there are medical scientists out there who can move past the suggested checklists and institutional controls on bias.  I know that this is an oversimplification and that many can.  Part of the problem in medicine and psychiatry is that there are very few people who can play in the big leagues.  I freely admit that I am not one of them; at best, I am a lower-tier teacher of what the big leaguers do.  But I do know that the problem with clinical trials is a lack of precision.  Part of that is captured by Ioannidis' explanation, but in medicine and psychiatry a lot of it comes down to measurement error.  Measuring syndromes by very approximate means, or collapsing some of the measurements into gross categories that may more easily demonstrate an effect, may be a way to get regulatory approval from the FDA, but it is not a way to do good science or produce reproducible results.
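The measurement error point is easy to demonstrate.  Here is a small simulation in Python; the true effect size, the noise level of the rating scale, and the sample size are all illustrative assumptions rather than estimates from any actual trial.

import numpy as np

rng = np.random.default_rng(42)
n = 200          # patients per arm (assumed)
true_d = 0.5     # true standardized drug-placebo difference (assumed)

def observed_effect(noise_sd, reps=2000):
    """Average observed effect size when the rating scale adds random error."""
    effects = []
    for _ in range(reps):
        drug = rng.normal(true_d, 1.0, n) + rng.normal(0.0, noise_sd, n)
        placebo = rng.normal(0.0, 1.0, n) + rng.normal(0.0, noise_sd, n)
        pooled_sd = np.sqrt((drug.var(ddof=1) + placebo.var(ddof=1)) / 2)
        effects.append((drug.mean() - placebo.mean()) / pooled_sd)
    return np.mean(effects)

print(observed_effect(0.0))   # ~0.50: the true effect with perfect measurement
print(observed_effect(1.0))   # ~0.35: the same drug rated on a noisy scale

The drug never changes; only the instrument does, and the observed effect shrinks roughly with the square root of the scale's reliability.  Collapsing those noisy scores into gross responder categories throws away even more information.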


George Dawson, MD, DFAPA




References:  

1:  Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

2:  Moonesinghe R, Khoury MJ, Janssens ACJW (2007)  Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Med 4(2): e28. doi:10.1371/journal.pmed.0040028

3:  Djulbegovic B, Hozo I (2007)  When Should Potentially False Research Findings Be Considered Acceptable? PLoS Med 4(2): e26. doi:10.1371/journal.pmed.0040026

4:  The PLoS Medicine Editors (2005) Minimizing Mistakes and Embracing Uncertainty. PLoS Med 2(8): e272. doi:10.1371/journal.pmed.0020272

5:  Begley CG, Buchan AM, Dirnagl U (2015) Robust research: Institutions must do their part for reproducibility. Nature 525(7567): 25-27. doi:10.1038/525025a. PMID: 26333454


4 comments:

  1. Thank you for the reminder that most medical research is poor, not just psychiatric. And in light of studies like Paxil 329, it seems that universities (like Brown) and academic publishing empires are every bit as culpable as Big Pharma. And their margins are bigger, too.

  2. George,

    I recall you pointing out this paper to your readers: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3123637/pdf/1747-5341-6-8.pdf for a review of antidepressant efficacy. I see now that it is a reply to the same Ioannidis who wrote the paper you mention in this post. Do you think he was mistaken on antidepressants?

    1. Excellent memory Simon - just when I thought nobody read this blog...

      First off, a disclosure. You might recall it, but others reading this may not: I solicited both articles as a managing editor for this journal. I was pleased and surprised to receive the articles from all of the authors. On this blog I have also linked to Leucht et al's subsequent work comparing medications used for psychiatric and general medical disorders and showing that they are comparable.

      I think both sets of authors are right given their analyses and the data they considered. You may have picked up on the theme that I think this is the logical conclusion of analyzing clinical trials that are methodologically weak, whether in psychiatry or any other field of medicine. That weakness is in measurement on the front end and measurement over the course of treatment. You cannot hope to show a robust treatment effect across heterogeneous groups of people who are really not that ill, may not be adherent to treatment, and may be professional volunteers.

      I sent Dr. Ioannidis an e-mail recently about when the anecdotal becomes statistical. Any clinical psychiatrist who has been working as long as I have has easily seen numbers of people across all diagnostic categories that exceed the enrollments of most large clinical trials. I expect that any psychiatrist with that experience who is still working is doing so because they are effective, and these days there is a lot of concrete data to back that up. I am interested in his response and in how an epidemiologist/statistician would analyze the problem from that perspective rather than just looking at clinical trials with severe limitations.

      So far no response, but I am hopeful. The idea that clinical psychiatry is accurately characterized by clinical trials is pure evidence-based fantasy as far as I can tell. If the methods don't resemble clinical practice, and evidence-based sites like Cochrane carry the same boilerplate about study limitations and the need for more trials - that is an exercise in futility.

      If my results were that bad, I would have quit and gone into a different specialty a long time ago.

  3. I'm not sure how many people do read these, but I love your blog! It's very informative!
