
Sunday, September 20, 2015

Ioannidis - Why His Landmark Paper Will Stand The Test of Time

John P. A. Ioannidis came out with an essay in 2005 that is a landmark of sorts.  In it he discussed the concern that most published research is false and the reasons behind that observation (1).  That led to responses in the same publication about how false research findings could be minimized or, in some cases, accepted (2-4).  Anyone trained in medicine should not find these observations surprising.  In the nearly 30 years since I was in medical school, findings have come and findings have gone.  Interestingly, I first heard that idea from a biochemistry professor who was charged with organizing the medical students into discussion seminars where we critiqued current research from a broad spectrum of journals.  His final advice to every class was to keep reading the New England Journal of Medicine for exactly that reason.  Many people have an inaccurate view of science, especially as it applies to medicine.  They think that science is supposed to be true and that it is a belief system.  In fact, science is a process, and initial theories are supposed to be the subject of debate and replication.  If you look closely at the discussion section of any paper reporting correlative research, you will invariably find the researchers saying that their results are suggestive and need further replication.  In the short time I have been writing this blog, asthma treatments, the Swan-Ganz catheter, and the diagnosis and treatment of acute bronchitis and exacerbations of chronic obstructive pulmonary disease have all been clear examples of how theories and research about the old standard of care necessarily change over time.  It is becoming increasingly obvious that reproducible research is in short supply.

Ioannidis provided six corollaries with his original paper.  The first four, concerning statistical power, effect size, the number of relationships tested, and the flexibility of designs and analyses, are relatively straightforward.  The last two corollaries are more focused on subjectivity and are less accessible.  I think it is common when reading research to look at the technical aspects of the paper and all of the statistics involved and forget about the human side of the equation.  From the paper, his fifth corollary follows:

"Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias, u.  Conflicts of interest are very common in biomedical research [26], and typically they are inadequately and sparsely reported [26,27].  Prejudice may not necessarily have financial roots.  Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings.  Many otherwise seemingly independent, university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure.  Such non-financial conflicts may also lead to distorted reported results and interpretations.  Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [28]"  all from Reference 1.

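To put rough numbers on these corollaries, here is a minimal sketch (in Python, chosen only for illustration) of the positive predictive value calculation given in the Ioannidis paper, including the bias term u that Corollary 5 refers to.  The numeric scenarios below are hypothetical.

```python
def ppv(alpha=0.05, power=0.8, R=1.0, u=0.0):
    """Post-study probability that a claimed research finding is true,
    following the formulas in Ioannidis (2005).

    alpha -- type I error rate
    power -- 1 - beta, the chance of detecting a true relationship
    R     -- pre-study odds that a probed relationship is true
    u     -- proportion of analyses that would not have been "findings"
             but are reported as such because of bias
    """
    beta = 1.0 - power
    true_positives = (1.0 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives


# Hypothetical scenarios: a well-powered test with even pre-study odds,
# an underpowered long shot, and the same long shot with substantial bias.
print(round(ppv(power=0.8, R=1.0, u=0.0), 2))   # ~0.94
print(round(ppv(power=0.2, R=0.1, u=0.0), 2))   # ~0.29
print(round(ppv(power=0.2, R=0.1, u=0.3), 2))   # ~0.12
```

On those assumptions, an underpowered test of an unlikely hypothesis is more likely to be wrong than right even before bias enters the picture, and bias only makes it worse - which is the cumulative point of the corollaries.
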
The typical conflict of interest arguments in medicine have to do with financial conflicts of interest.  If the current reporting database is to be believed, they may be considerable.  A commentary in Nature earlier this month (5) speaks to the non-financial side of conflicts of interest.  The primary focus is on reproducibility as a marker of quality research.  The authors cite the findings that two-thirds of surveyed members of the American Society for Cell Biology were unable to reproduce published results, and that pharmaceutical researchers were able to reproduce the results of one quarter or fewer of high-profile papers.  They call this the burden of irreproducible research.  They touch on what scientific journals have done to counter some of these biases, basically checklists of good design and more statisticians on staff.  That may be the case for Science and Nature, but what about the raft of online open access journals that not only have a less rigorous review process but in some cases require the authors to suggest their own reviewers?  A central piece of the Nature article was a survey of 140 trainees at the MD Anderson Cancer Center in Houston, Texas.  Nearly 50% of the trainees reported that mentors required them to have a high-impact paper before moving on.  Another 30% felt pressured to support their mentor's hypothesis even when the data did not support it, and about 18% felt pressured to publish uncertain findings.  The authors suggest that the home institutions are where the problem lies, since that is where the incentive for this behavior originates.  They say that the institutions themselves benefit from perverse incentives that lead researchers to accumulate markers of scientific achievement rather than high-quality reproducible work.  They want the institutions to take corrective steps toward more reproducible research.

One area of bias that Ioannidis and the Nature commentators are light on is the political bias that seems to preferentially affect psychiatry.  If reputable scientists are affected by the many factors previously described, how might a pre-existing bias against psychiatry, various personal vendettas, a clear lack of expertise and scholarship, and a strong financial incentive to marshal and sell to the antipsychiatry throng work out?  Even if there is a legitimate critic in that group - how would you tell?  And even more significantly, why is it that no matter what the underlying factors, conspiracy theories seem to be the inevitable explanation rather than any real scientific dispute?  Apart from journalists, I can think of no group of people more committed to their own findings, or to the theory that monolithic psychiatry is the common evil creating all of these problems, than the morally indignant critics who like to tell us what is wrong with our discipline.  Knowing their positions and, in many cases, their over-the-top public statements, why would we expect them, sifting through thousands of documents, to produce a result other than the one they would like to see?

I hope that there are medical scientists out there who can move past the suggested checklists and institutional controls on bias.  I know that this is an oversimplification and that many can.  Part of the problem in medicine and psychiatry is that there are very few people who can play in the big leagues.  I freely admit that I am not one of them.  At best I am a lower-tier teacher of what the big leaguers do.  But I do know that a central problem with clinical trials is a lack of precision.  Part of that is covered by Ioannidis' explanation, but in medicine and psychiatry a lot of it has to do with measurement error, as the sketch below illustrates.  Measuring syndromes by very approximate means, or collapsing some of the measurements into gross categories that more easily demonstrate an effect, may be a way to get regulatory approval from the FDA, but it is not a way to do good science or produce reproducible results.
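
To illustrate the measurement error point, here is a toy simulation - not taken from any of the cited papers, with purely hypothetical numbers - showing how the noise added by an imprecise rating instrument dilutes an observed effect size.

```python
import random
import statistics


def observed_effect(n=200, true_d=0.5, noise_sd=0.0, trials=500, seed=1):
    """Toy simulation: a true standardized group difference of true_d,
    observed through an instrument that adds Gaussian measurement noise.
    Returns the average observed effect size (Cohen's d) across trials."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) + rng.gauss(0.0, noise_sd) for _ in range(n)]
        treated = [rng.gauss(true_d, 1.0) + rng.gauss(0.0, noise_sd) for _ in range(n)]
        pooled_sd = ((statistics.pvariance(control) +
                      statistics.pvariance(treated)) / 2.0) ** 0.5
        results.append((statistics.mean(treated) - statistics.mean(control)) / pooled_sd)
    return statistics.mean(results)


print(round(observed_effect(noise_sd=0.0), 2))  # ~0.50 with a noiseless measure
print(round(observed_effect(noise_sd=1.0), 2))  # ~0.35 - noise dilutes the observed effect
```

The true effect never changes in this simulation; only the precision of the measurement does, and the observed effect - and with it the chance of replication - shrinks accordingly.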


George Dawson, MD, DFAPA




References:  

1:  Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124

2:  Moonesinghe R, Khoury MJ, Janssens ACJW (2007)  Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLoS Med 4(2): e28. doi:10.1371/journal.pmed.0040028

3:  Djulbegovic B, Hozo I (2007)  When Should Potentially False Research Findings Be Considered Acceptable? PLoS Med 4(2): e26. doi:10.1371/journal.pmed.0040026

4:  The PLoS Medicine Editors (2005) Minimizing Mistakes and Embracing Uncertainty. PLoS Med 2(8): e272. doi:10.1371/journal.pmed.0020272

5:  Begley CG, Buchan AM, Dirnagl U. Robust research: Institutions must do their part for reproducibility. Nature. 2015 Sep 3;525(7567):25-7. doi: 10.1038/525025a. PubMed PMID: 26333454.


Monday, March 17, 2014

Turning the United States Into Radioactive Dust

I don't know if you noticed, but it appears that the post-Cold War era is over.  The Putin-appointed head of a Russian news agency, Dmitry Kiselyov, went on Russian television this morning and stated that Russia is "the only country in the world capable of turning the USA into radioactive dust."  In case anyone wanted to dismiss that as falling short of a threat, he went on to say that President Obama's hair was turning gray because he was worried about Russia's nuclear arsenal.  We have not heard that kind of serious rhetoric since the actual Cold War.  As a survivor of the Cold War, I went back and looked at the period it spanned; although the dates are apparently controversial, 1947 to 1991 are commonly cited.  I can remember writing a paper in middle school on the doctrine of mutually assured destruction as the driving force behind the Cold War.  In the time I have thought about it since, some of the cool heads that prevented nuclear war were in the military and, in many if not most cases, Russian.  We probably need to hope that they are still out there, rather than an irresponsible broadcaster who may not realize that if the US is dust, irrespective of what happens to Russia as a result of weapons, the planet will be unlivable.

I am by nature a survivalist of sorts.  And when I detect the Cold War heating up again, I start to plan for the worst.  The survivalist credo is that we are all 9 meals away from total chaos.  So I start to think about how much food, water, and medicine I will have to stockpile.  What kind of power generation system will I need?  What about heating, ventilation, and air filtration?  And what about access?  There are currently condominiums being sold in old hardened missile silos, but what are the odds that you will be able to travel hundreds of miles after a nuclear attack?  If you are close to the explosion there will be fallout, and the EMP burst will probably knock out the ignition of your vehicle unless you have the foresight and resources to store it inside a Faraday cage every night.  There is also the question of what happens to the psychology of your fellow survivors.  In the post-apocalyptic book The Road, a man and his son survive in the bleakest of circumstances on the road.  We learn through a series of flashbacks that the man's wife, the boy's mother, could not adapt to the survivalist atmosphere and ended her life.  In one scene they meet an old man on the road, and the man gets into the following exchange with him after the old man says he knew the apocalyptic event was coming.  It captures the paradox of being a survivalist (pp 168-169):

Man:  "Did you try to get ready for it?"
Old Man:  "No.  What would you do?"
Man:  "I don't know."
Old Man:  "People always getting ready for tomorrow.  I didn't believe in that.  Tomorrow wasn't getting ready for them.  It didn't even know they were there."
Man:  "I guess not."
Old Man:  "Even if you knew what to do you wouldn't know what to do.  You wouldn't know if you wanted to do it or not.  Suppose you were the last one left?  Suppose you did that to yourself?"

By my own informal polling, very few people want to unconditionally survive either a man-made or a natural disaster.  Many have told me that they could not stand to be in their basement for more than a few hours, much less days, months, or years.

For the purpose of this post, I want to home in on the rhetoric, or more specifically the threats.  I have written previous posts on this blog that look at how this rhetoric flows from the history of warfare and dates back to a typical situation with primitive man.  In those days, the goal of warfare was the annihilation of your neighbors.  In many cases the precipitants were trivial, like the theft of a small number of livestock or liaisons between men and women of opposing tribes.  In tribes of small numbers of people, even when there were survivors, if enough were killed it could mean the extinction of a certain people.  Primitive man seemed to think: "My adversaries are gone and the problem is solved."

Over time, the fighting was handed over to professional soldiers and became more formalized.  There were still millions of civilian casualties.  I think at least part of the extreme rhetoric of Kiselyov is rooted in that dynamic.  Many will say that it is propaganda or statements made for political advantage, and in this case there are the possible factors of nationalism or just anger at the US for some primitive rhetoric of its own.  But I do not think that a statement like this can simply be dismissed.  There were, for example, two incidents where Russian military officers exercised a degree of restraint that in all probability prevented a nuclear war.  In one of those cases the officer was penalized for exercising restraint, even though he probably avoided a full-scale nuclear war.  In both cases the officers looked into the abyss and realized that they did not want to be responsible for the end of civilization as we know it.

I don't think extreme rhetoric is limited to international politics.  It happens at one point or another with every form of intolerance, whether that intolerance is rooted in race, religion, or sexual preference.  That is especially true if there are physical threats and physical aggression.  Intolerant rhetoric can also occur at a more symbolic level.  We have seen extreme rhetoric on psychiatry blogs recently.  Rather than the annihilation of the United States, the posters would prefer the annihilation of psychiatry.  I would say it is a symbolic annihilation, but it is clear that many of them want more than that.  It still flows from the sense of loyalty to tribe, the need to annihilate the opponents, the necessary rigid intolerance, and the resulting distortion of rational thought.  Certainly self-serving bias exists to some extent in everyone, and it may not be that apparent to the biased person.  It took Ioannidis to open everyone's eyes to that fact in the more rational scientific world.  Bias can even serve a purpose in science, where the active process often requires vigorous dialogue and debate.  Sometimes people mistake science for the truth when science is a process.  In order for that dialogue and debate to occur in an academic field, there has to be a basic level of scholarship in the area being debated.  Without it there is a digression to tribal annihilation dynamics and complete intolerance.  That is counterproductive and negates any legitimate points that the proponents might otherwise have.

In science, the risks are lower, but at a minimum this kind of rhetoric adds nothing to the scientific debate.  An irrational bias with no basis in reality is the most primitive level of analysis.  In the 21st century, nobody needs to be annihilated, in reality or at the symbolic level.

George Dawson, MD, DFAPA

Cormac McCarthy.  The Road.  Vintage Books.  New York, 2006.