
Friday, June 27, 2025

Clinical Trials Versus Clinical Treatment – Key Differences




This is an ongoing source of debate in psychiatry – largely because of the forces that seek to delegitimize the field. This post is not directed at them, since they have a long track record of producing the same biased arguments. This is my perspective as a person who has successfully treated people with severe psychiatric problems and been an investigator in clinical trials. I have first-hand experience with the limitations of clinical trials, and they can be considerable. I am certainly not unique in that regard, and it is important to know why those of us with that experience can dismiss headlines claiming that psychiatric treatments do not work, as well as many of the arguments attached to them.

Let me preface these comments by saying that running a clinical trial, at whatever level you are responsible for, is hard work. I have been a co-principal investigator, but most of my work was focused on research subject follow-ups and patient safety. I was the guy who got the call about unexpected side effects, lab reports, ECGs, and imaging studies. In those cases, it was my job to decide whether the problem was due to the study drug (or placebo) or an intercurrent illness and whether it was safe for the research subject to proceed. If the decision was to stop the protocol, there was always at least one person who was upset with that decision. There is a study coordinator who wants to maximize retention and study completers. There are the research subjects and their families, who may consider the research drug the latest hope – even though they do not know if they are getting placebo. There are other investigators and research assistants who all want to maximize research participation.

Let me start by framing the main differences between research and clinical treatment, starting at the top of the diagram. Clinical trials sample the general population on a very specific feature – in medicine, typically a disease state. That selection process is made even more stringent to eliminate other variables; in the diagram this is indicated by the inclusion and exclusion selection criteria. None of that exists in clinical practice, where any number of complicating factors need to be addressed in addition to the main indication for treatment.

The recruitment phase of a clinical trial is critical because funding is limited, an adequate sample size must be recruited, and both must happen within an acceptable time frame. About 70% of clinical trials do not recruit the number of subjects they need in the expected time frame. Currently in the US, less than 3% of adult cancer patients participate in clinical trials (ref 1, p. 268). In older research it was often stated that only 5% of eligible subjects participate in research. In my experience that number is accurate; in some of my studies it approached 2% at times. In other areas the numbers are higher, but they typically max out at about 25%, and there is a significant difference between experimental trials and retrospective or epidemiological studies that are less demanding.

The enrolled-to-screened ratio is important because it suggests a skewed sample of the population – a skew that might be more evident if the reasons for declining research participation were explored.

The typical response to the recruitment problem is that it is taken care of by randomization – but is that accurate? For example, if people do not want to take a chance on a new medication or on randomization, does that eliminate potentially good treatment responders from both arms of the trial? If a trial is debated in the press – like many psychiatric trials are – will that increase the nocebo response in both arms? If a trial involves discontinuation or substitution of a current medication that had been somewhat effective, does that lead to increased problems with nocebo effects and dropouts? The physician-patient relationship can be affected by both offering and not offering trial participation. Physicians are unlikely to suggest that stable, well-treated patients participate for altruistic purposes. Recruiting pressure on the clinical trial team may lead them to accept suboptimal research subjects with a history of participation in multiple trials, subjects with milder illness or equal comorbidity (e.g. anxiety = depression), or subjects who are more likely to drop out.

These are just a few of the factors that can affect the recruitment and enrollment phase that are not very well investigated or recorded. Speaking from experience, I can say that what looks like an acceptable treatment protocol is often modified as investigators start to panic about low enrollment in the trial.

On the clinical side there are different problems. No matter how complex the patient’s situation, there are typically waiting lists of patients to be seen by psychiatrists. This is true even though only about 6% of the medical workforce that prescribes psychotropic medications are psychiatrists. There are no exclusion criteria, and rather than the trial participant effect – where trial participants are likely to do better even if the medication is ineffective (ref 1, p. 258) – patients referred to see psychiatrists are generally less healthy, more likely to have chronic illness, less likely to adhere to treatment, more likely to have a substance use disorder, and more likely to have medical and psychiatric comorbidity.

In addition to the placebo and nocebo effects, there are a number of lesser-known factors affecting the experience of people in clinical trials and clinical care. Some of them will alter the entire trial sample. For example, a recruitment bias due to failing enrollment can result in equivocal cases being enrolled (for example, an adjustment disorder with depressed mood rather than major depression in a depression trial). These subjects are more likely to appear to respond to nonspecific factors. Many people diagnosed with depression have a long history of primary insomnia and may appear recovered if their insomnia is treated by the antidepressant or by adjunct medications allowed in some protocols. I have participated in some protocols where all of the investigators were instructed to avoid any semblance of psychotherapy and not to enroll anyone who might have a personality disorder. These are some of the occurrences behind the scenes that can affect clinical trials.

The number of times a person volunteers for clinical research can be an issue. There is very little written about this, but there are some centers that specialize in research and often recruit healthy subjects for early drug trials and compensate them. There are some online sites that say you can volunteer for as many studies as you qualify for in any given year. There are obvious concerns about whether subjects need a break between trials, as well as about deception involved in gaining access to the trial, adhering to the protocol, and skewing the trial results (2-4). I also have the concern that it is easy to fake some psychiatric disorders to get into a paid trial. The trials I have been in as an investigator were all through academic centers where reimbursement was primarily to cover travel expenses. The professional patient role was probably less of an issue in those circumstances, but it is a topic covered only sporadically in the research literature.

With the number of variables listed in the diagram, it takes a lot of thought to figure out their impact on the trial and on clinical care. It is useful to keep in mind a few overall points. First, there is a lot more going on in the trial than an effect based only on the biological action of a medication (placebo and nocebo effects, for a start). Second, clinical care is not like a clinical trial. If someone is very ill and not responding to a medication, or is having side effects, a change needs to occur very quickly – typically the same day on an inpatient unit or a day or two later in the outpatient clinic. In a clinical trial, the subject can drop out or discuss other options with the investigator, but those options are very limited and slow. Major side effects typically need to be reported to the Human Subjects Board monitoring the research protocol, the protocol sponsor, and in some cases the FDA. Third, clinical care by its very nature incorporates many of the nonspecific factors associated with a nondrug positive response, usually termed a placebo response. These same nonspecific factors can occur in psychotherapy trials as well as complementary medicine trials.

Another important aspect of clinical trials is the measurement issue.  Various rating scales are typically employed as representations of psychiatric diagnoses. There are standardized and valid scales to measure adverse drug effects and, in some cases, more specific symptoms like sleep.  Global ratings by the investigators are added as an additional measure of validity.  These are all crude measures and unless subjects are selected specifically for a phasic severe disorder – the ratings will fluctuate significantly over the course of the trial based on various factors. In addition to the psychological, environmental, and biological factors some authors in medicine and biology have started to use the Heisenberg effect to describe this uncertainty. Since these measurements are not of discrete entities and cannot be represented by wave functions – this term is used at the level of a crude analogy for uncertainty in biological measurement.

Clinical care is much different. There has only been a pretense of measurement. There has been some publication on measurement-based psychiatry as an offshoot of evidence-based medicine; I don’t think it offers much of an improvement over clinical psychiatry. In psychiatry, like all of medicine, we are taught to recognize and respond to patterns, not rating scales. I was reminded just this week by a referral of a patient who was described as having a histrionic personality disorder based on DSM criteria and some additional psychometric testing. It was apparent to me within a few minutes that the patient was manic and the correct diagnosis was bipolar disorder. Psychiatrists are taught by seeing real patients, recognizing their problems, and organizing a treatment plan around those patterns. Referring to rating lists based on DSM symptoms that are in turn based on patterns in real people leaves a lot of room for interpretation.

The treatment aspect is also a critical difference. Few people would want to receive treatment the way it is set up in clinical trials, and anyone with a complicated problem would not get adequate treatment under a rigid protocol. Most clinical trials are set up to see if there is an adequate efficacy and safety signal for the FDA to allow the medication on the market. Post-marketing surveillance is used to detect rare but severe side effects that might result in a new medication being withdrawn from the market.

Clinical treatment in psychiatry is timely and effective. Like the rest of medicine, it is not perfect and a lot can go wrong. Based on my experience as a researcher, a quality assurance reviewer, a teacher, and a colleague in large departments, most people can count on finding a good if not excellent psychiatrist to provide effective treatment.

George Dawson, MD, DFAPA

 

References:

1:  Piantadosi S.  Clinical Trials: A Methodological Perspective. 3rd ed.  Hoboken, NJ: John Wiley and Sons, Inc, 2017: 886 pp.

2:  Shiovitz TM, Zarrow ME, Shiovitz AM, Bystritsky AM. Failure rate and "professional subjects" in clinical trials of major depressive disorder. J Clin Psychiatry. 2011 Sep;72(9):1284; author reply 1284-5. doi: 10.4088/JCP.11lr07229. PMID: 21951988.

3:  Pavletic A, Pao M. Safety, Science, or Both? Deceptive Healthy Volunteers: Psychiatric Conditions Uncovered by Objective Methods of Screening. Psychosomatics. 2017 Nov-Dec;58(6):657-663. doi: 10.1016/j.psym.2017.05.001. Epub 2017 May 9. PMID: 28651795; PMCID: PMC5680154.

4:  Lee CP, Holmes T, Neri E, Kushida CA. Deception in clinical trials and its impact on recruitment and adherence of study participants. Contemp Clin Trials. 2018 Sep;72:146-157. doi: 10.1016/j.cct.2018.08.002. Epub 2018 Aug 21. PMID: 30138717; PMCID: PMC6203693.

5:  Quitkin FM, Rabkin JG, Gerald J, Davis JM, Klein DF. Validity of clinical trials of antidepressants. Am J Psychiatry. 2000 Mar;157(3):327-37. doi: 10.1176/appi.ajp.157.3.327. PMID: 10698806.

6:  Atkins PW.  Physical Chemistry. 5th ed.  Oxford: Oxford University Press. 1994:  380-387.

The issue of superposition is often conflated with the uncertainty involved in biological measurement. It is described here as an analogy rather than as the definition of superposition from quantum mechanics.

Workforce Reference:  40,000 psychiatrists / (40,000 psychiatrists + 290,000 nurse practitioners + 209,000 primary care physicians + 115,000 certified physician assistants) = 40,000/654,000 = 0.06, or 6%.


Graphics:  I produced the above graphic with Visio.  Click to enlarge. 

Saturday, December 30, 2017

The Nocebo Effect In Antidepressant Trials



Early in my career I was an assistant to principal investigators who were doing clinical trials of drugs to treat generalized anxiety disorder, major depression, schizophrenia, and Alzheimer's disease. My job was to screen patients, do medical histories and physical examinations, and follow them throughout the research protocol. That is not an easy job. The difficult part is making sure that the research patients are not getting any medical complications from a new and experimental drug, in addition to assessing any possible therapeutic effects.

One of the main complications was the nocebo effect. With all of the hype about placebo effects with antidepressants, I have always found it curious that few mention the nocebo effect and how it affects clinical research. In those early days it was possible to break the blind and, if research subjects withdrew from the protocol, to let them know whether they had been taking active drug or placebo. An adverse reaction to placebo is the nocebo effect. It was common in these protocols for research subjects to either stop the protocol due to adverse effects or describe severe side effects while they were taking placebo.

One of the main problems I encountered with this effect in clinical trials – and an even bigger problem today – is the loss of research subjects who terminate early due to nocebo effects. In the example I gave, the patients developed significant side effects in response to a placebo, but people can also develop dramatic side effects to the active research medication. Whether such effects are more likely with the active drug or with the placebo has not been carefully studied. These are critical questions because FDA trials now require an intent-to-treat analysis, meaning that all of the research subjects who did not complete the protocol, for whatever reason, need to be included in the analysis. That becomes a problem when a subject has not been treated with the active drug – both in terms of true response and in terms of side effect reporting. All FDA package inserts contain detailed information on medication side effects, and today in many cases there is a side effect comparison between active drug and placebo. Nocebo-related data can skew the side effect data reported in the package inserts.
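
To make the intent-to-treat problem concrete, here is a minimal Python sketch of how an ITT analysis differs from a completers-only (per-protocol) analysis when subjects drop out early. The numbers are hypothetical and chosen purely for illustration – they are not taken from any actual trial.

```python
# Minimal sketch: intent-to-treat (ITT) vs. per-protocol response rates.
# All counts below are hypothetical and for illustration only.

def response_rates(randomized, completed, responders):
    """Return (ITT rate, per-protocol rate) for one treatment arm.

    ITT keeps every randomized subject in the denominator and effectively
    treats early dropouts as non-responders; per-protocol counts completers only.
    """
    itt = responders / randomized
    per_protocol = responders / completed
    return itt, per_protocol

# Hypothetical arm: 100 randomized, 20 drop out early (some from nocebo
# effects), and 45 of the 80 completers meet the response criterion.
itt, pp = response_rates(randomized=100, completed=80, responders=45)
print(f"ITT response rate:          {itt:.1%}")   # 45.0%
print(f"Per-protocol response rate: {pp:.1%}")    # 56.2%
```

The same logic applies to side effect reporting: subjects who never took much of the active drug still end up in the denominators of the ITT side effect rates.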

I always thought that antidepressant trials were the ideal setting to study nocebo effects and came across a paper two years ago that does just that (1). The time frame over the last two years was an interesting one. We saw all of the speculation surrounding the DSM-5 in the summer of 2015. Much of that speculation involved sensational articles in the media suggesting that antidepressant medications did not work or at least were no better than the placebo effect. Further speculation suggested that pharmaceutical manufacturers and the American Psychiatric Association (via the DSM-5) had an interest in broadening the market for antidepressants and increasing their use. At no point did any of the critics suggest that there may be a serious problem with clinical trials based on the nocebo effect. And most interestingly, many of the critics in non-professional forums complained mostly about the side effects of these medications. In some cases they described these medications as very dangerous.

The paper in reference 1 looks only at people receiving placebo in a pooled sample of placebo-arm data from 20 industry-sponsored, multi-site randomized clinical trials of antidepressant medication in the treatment of acute major depression. Adverse event data were recorded for all 20 clinical trials. Adverse events (AEs) can be pre-existing nonspecific somatic symptoms. Treatment-emergent adverse events (TEAEs) were events that occurred or worsened during placebo treatment. The adverse events were obtained by open-ended questioning and, in clinical trials from that period (1993-2010), were listed on standard forms. Five endpoints were studied: any AE, any treatment-related AE, any severe AE, any serious AE resulting in discontinuation from the study, and discontinuation from the study for any reason. Worsening of clinical symptoms was also measured and defined as any increase in the depression rating score on any scale used for that particular study.

The main results included:

1.  TEAEs from placebo were reported in 1,569/2,457 or 63.9% of the study participants.

2.  11.2% (274) of the participants had worsening depression rating scores during the placebo treatment.

3.  4.7% (115) of the participants discontinued the study due to an AE during the placebo treatment.     

Just considering those results and nothing more illustrates the nocebo problem with current state-of-the-art clinical trials technology. If 63.9% of the placebo-treated subjects experience side effects – what does that translate to in terms of subjects taking the active drug getting side effects unrelated to that drug? Although untreated depression might be expected to worsen – how much of that worsening is nocebo modulated, and how much crosses over to the active drug arm? A significant number of people dropped out of the trial due to the nocebo effect. The main results suggest that in randomized placebo-controlled trials of efficacy and safety, the nocebo effect is substantial. Since the observed TEAEs and the percentage of research subjects worsening on placebo were substantial, extension to TEAEs, worsening, and dropping out in clinical practice could be expected. That would be a very difficult problem to research due to a lack of standardization across practices.
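
As a sanity check on the proportions above, here is a short Python sketch that recomputes them from the reported counts and attaches approximate 95% confidence intervals. The counts come from the pooled placebo sample (n = 2,457); the confidence intervals are my own addition for illustration and do not appear in the paper.

```python
# Recompute the placebo-arm proportions from the reported counts and attach
# approximate 95% confidence intervals (normal approximation). The intervals
# are illustrative only and are not reported in the paper itself.
from math import sqrt

N = 2457  # pooled placebo-arm sample size
events = {
    "Any TEAE on placebo": 1569,
    "Worsening depression score on placebo": 274,
    "Discontinued due to an AE on placebo": 115,
}

for label, k in events.items():
    p = k / N
    hw = 1.96 * sqrt(p * (1 - p) / N)  # half-width of the 95% CI
    print(f"{label}: {p:.1%} (95% CI {p - hw:.1%} to {p + hw:.1%})")
```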

The authors did not stop with those results. They looked at additional variables to see if there was any evidence to confirm either of the two major hypotheses about the nocebo effect. The first is a conditioning hypothesis: prior exposure results in the expectation of an associated effect. They cite as an example women receiving chemotherapy for breast cancer, in whom an associated stimulus led to more nausea in the conditioned group (4). In this case they looked at previous treatment with antidepressants as a possible conditioning effect and did not find any significant associations.

The other major hypothesis regarding the nocebo effect is an expectation hypothesis. They cite an example in which a sample of college students were given an inert placebo and told that it was an herbal supplement for cognitive enhancement (5). They were provided with a fictitious list of potential benefits and side effects. Symptoms were endorsed by most students, but the students who believed that they received the active supplement endorsed more symptoms, suggesting that their expectation of a supplement effect affected their perception. The authors found some support that previous treatment with Hypericum perforatum (St. John's Wort) may have been associated with a greater likelihood of reporting TEAEs and suggest that users of complementary medications may be more suspicious of medical pharmacotherapy.

The paper is a fairly concise review of demographic and neurobiological factors associated with the nocebo effect.  On the neurobiological side they cite the work of Enck, et al (6).  Elman and Borsook (7) have an interesting paper that looks at addiction, pain, and several common substrates of the placebo and nocebo effects.             

The take-home message for me is what I have known for over a decade at this point: current clinical trials technology in psychiatry and medicine in general is very primitive. The evidence-based movement has had its day if this is the kind of evidence they are considering. Why would anyone expect much more than a mild to moderate effect from a medication used for heterogeneous disorders when the true effect of the medication is significantly affected by two factors that are never measured? This suggests areas for improvement in clinical trials that could render much of what has been collected so far obsolete.

A final observation comes to mind, and that is a phenomenon I wrote about on this blog many years ago: people with little to no expertise in psychiatry or medicine don't hesitate to criticize the field. A good example would be people who have never treated a single case of depression – much less thousands of cases – doing a meta-analysis and claiming that antidepressants are placebos. I don't recall any of these critics in the popular press considering the limitations of clinical trials. Without that basic consideration, any theory that fits the apparent data can apply.

What would be useful at this point would be detailed analyses of the placebo arm of all clinical trials and a discussion of how these effects might bias the data. Actual interventions to quantify or eliminate these effects in future trials are the next step. These are more practical steps than hoping for the mythical large clinical trials, with slightly more sophisticated clinical trials technology, that epidemiologists and the Cochrane Collaboration routinely recommend.


George Dawson, MD, DFAPA







References:

1: Dodd S, Schacht A, Kelin K, Dueñas H, Reed VA, Williams LJ, Quirk FH, Malhi GS, Berk M. Nocebo effects in the treatment of major depression: results from an individual study participant-level meta-analysis of the placebo arm of duloxetine clinical trials. J Clin Psychiatry. 2015 Jun;76(6):702-11. doi: 10.4088/JCP.13r08858. PubMed PMID: 26132671.

2:  Gupta SK. Intention-to-treat concept: A review. Perspectives in Clinical Research. 2011;2(3):109-112. doi:10.4103/2229-3485.83221.

3:  Tinnermann A, Geuter S, Sprenger C, Finsterbusch J, Büchel C. Interaction between brain and spinal cord mediates value effects in nocebo hyperalgesia. Science. Published online October 5, 2017. doi: 10.1126/science.aan1221.

4:  Bovbjerg DH, Redd WH, Jacobsen PB, Manne SL, Taylor KL, Surbone A, Crown JP,Norton L, Gilewski TA, Hudis CA, et al. An experimental analysis of classically conditioned nausea during cancer chemotherapy. Psychosom Med. 1992 Nov-Dec;54(6):623-37. PubMed PMID: 1454956.

5: Link J, Haggard R, Kelly K, Forrer D. Placebo/nocebo symptom reporting in a sham herbal supplement trial. Eval Health Prof. 2006 Dec;29(4):394-406. PubMed PMID: 17102062.

6: Enck P, Benedetti F, Schedlowski M. New insights into the placebo and nocebo responses. Neuron. 2008 Jul 31;59(2):195-206. doi: 10.1016/j.neuron.2008.06.030. Review. PubMed PMID: 18667148.

7: Elman I, Borsook D. Common Brain Mechanisms of Chronic Pain and Addiction. Neuron. 2016 Jan 6;89(1):11-36. doi: 10.1016/j.neuron.2015.11.027. Review. PubMed PMID: 26748087.



Attributions:

Figure at the top is stock photo from Shutterstock per their licensing agreement by kasezo entitled:

Stock illustration ID: 284558927   conceptual 3d design of false pill.( placebo and nocebo effect.red and green colored version)

Sunday, November 19, 2017

What Are The Implications Of The Suboxone Versus Vivitrol Study For Treating Opioid Use Disorder?




A major study came out in the Lancet last week that was a head-to-head comparison of Suboxone (buprenorphine-naloxone or BUP-NX) and Vivitrol (extended-release naltrexone or XR-NTX). I am beginning with the product names here because they were the actual medications used in the study and nobody uses the generic names at this point other than physicians. This is an important study for a couple of reasons. The first is that oral naltrexone tablets have already been tried for the treatment of opioid use disorder (OUD) and that approach failed. The XR-NTX used in this study is a long-acting intramuscular injection that is given every 28 days. The second is that many people with OUD do not want to take BUP-NX for many reasons. They may be philosophically opposed. They may know from experience that they will relapse on it, using heroin and then covering heroin withdrawal with BUP-NX. They may not be able to tolerate the medication because of side effects or the possibility of cognitive side effects. The cognitive set of the patient is also important in the decision. It is common to find patients who benefit from XR-NTX because the medication makes heroin ineffective and therefore using heroin becomes a waste of money.

The study design is relatively straightforward.  This is a 24 week open-label randomized trial comparing BUP-NX to XR-NTX.  There is no placebo arm and I hope that at this point there are no human subjects committees suggesting that there should be.  OUD is just too dangerous to be considering a placebo group.  The protocols for starting treatment with either medication make blinding impossible. Eight study sites of the National Drug Abuse Clinical Trials Network (CTN) were used.  One of the non-uniform aspects of this trial was that the detox protocols varied by site:

1:  Two sites used no opioids, but used clonidine or "comfort meds" – a term that I really don't like to see. Other comfort meds typically include an NSAID like naproxen for muscle and joint pain, hydroxyzine for anxiety and insomnia, methocarbamol for muscle spasm, and dicyclomine for abdominal cramping.

2:  Four sites used 3-5 day methadone tapers.

3:  Two sites used 3-14 day buprenorphine tapers. 

If a subject was going on to the XR-NTX group they had to be off all opioids for three days, have negative toxicology for the presence of opioids, and have a negative naloxone challenge test.  The authors don't explicitly state this but all of these detox protocols favor BUP-NX in the induction phase or initial dosing toward maintenance.  That is basically because most moderate to heavy users of heroin will be experiencing withdrawal symptoms at the end of these protocols.

There were 283 subjects randomly assigned to the XR-NTX group and 287 to the BUP-NX group. Early termination occurred for a number of reasons in 78 of the XR-NTX group and 62 of the BUP-NX group. All 283 and 287 subjects respectively were included in the final intent-to-treat analysis.

The primary outcome variable was time to relapse. Relapse was defined as self-reported use plus either a positive urine toxicology for any non-study opioid or failure to provide a urine sample. The subjects were seen weekly for monitoring of cravings, self-reported use, adverse events, and other substance use. Standard physician- or nurse-led, office-based medication management was described as happening at these visits. It is not clear to me exactly what that entailed, but they describe a standard medication-focused visit. Psychosocial counseling was recommended and available, but it was not a variable in this research.

Secondary outcome variables included the proportion of subjects getting through the induction phase and into the active study, adverse events (including overdoses), frequency of non-study opioid use, and opioid cravings (rated on a 0-100 visual analogue scale).

In terms of results, they were broken down across several variables. The intent-to-treat analysis showed that relapse-free survival was 8.4 weeks in the XR-NTX group and 14.4 weeks in the BUP-NX group, but 20.4 weeks in the XR-NTX group and 15.2 weeks in the BUP-NX group when the per-protocol population rather than the intent-to-treat population was used. The difference in these results was due to induction failures (failure to start the medication at the end of detox) in the XR-NTX group. The rates of successful XR-NTX induction varied by site from 95% at an extended-stay opioid-free program to 52% at the methadone detox programs. Self-reported opioid abstinence parallels these results. The graphical representations of these data (survival curves) show essentially parallel curves after an initial drop due to differences in the induction protocol. The authors conclude that the drugs are equally safe and effective in preventing opioid relapse.
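
To illustrate why induction failures pull the intent-to-treat and per-protocol estimates apart, here is a deliberately simplified Python sketch. The subject data are invented, and the toy median calculation ignores the censoring that the actual Kaplan-Meier analysis handles – it is only meant to show the direction of the effect.

```python
# Simplified sketch: how induction failures separate ITT from per-protocol
# estimates of relapse-free survival. Data are invented; the real analysis
# used Kaplan-Meier methods with censoring, which this toy median ignores.
from statistics import median

def median_relapse_free_weeks(subjects, itt=True):
    """subjects is a list of (weeks_relapse_free, successfully_inducted)."""
    if itt:
        # ITT: induction failures count as immediate relapses (week 0).
        times = [weeks if inducted else 0.0 for weeks, inducted in subjects]
    else:
        # Per-protocol: only successfully inducted subjects are analyzed.
        times = [weeks for weeks, inducted in subjects if inducted]
    return median(times)

# Hypothetical arm with a high induction failure rate, as might happen with
# an antagonist after a short detox: 4 of 10 subjects never start the drug.
arm = [(0, False), (0, False), (0, False), (0, False),
       (18, True), (20, True), (22, True), (24, True), (24, True), (24, True)]

print("ITT median weeks relapse-free:         ", median_relapse_free_weeks(arm, itt=True))   # 19.0
print("Per-protocol median weeks relapse-free:", median_relapse_free_weeks(arm, itt=False))  # 23.0
```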

A separate interesting survival curve was the rating of opioid cravings over time. The authors' interpretation of these curves was that the BUP-NX group had fewer cravings initially but that by 24 weeks the ratings converged. There may be some additional data in that graph showing that the low point in cravings was reached about 5 weeks earlier in the BUP-NX group and therefore persisted longer.

The other important secondary outcome measure was the number of overdose deaths. Analyzed per protocol, there were 10 overdose events in the XR-NTX group and 9 overdose events in the BUP-NX group. Including the failed-induction subjects in the intent-to-treat analysis increases these numbers to 18 and 10 respectively. There were 2 fatal overdoses in the XR-NTX group and 3 fatal overdoses in the BUP-NX group. The fatal overdoses were associated with failed induction and premature termination of treatment.

As a physician involved in the treatment of OUD the implications here are:

1.  BUP-NX and XR-NTX are equivalent treatments and should be recognized as such - there has been some press about XR-NTX not being an "evidence-based" treatment despite the fact that it has been in use for some time.  Those articles either ignore the fact that it had the FDA approved indication or they ridicule the study used to get that approval. Here is the additional evidence.

2.  There is a need for standardized opioid detox protocols that are optimized for patient safety and efficacy in treating withdrawal symptoms - the three options used in these trials are representative of what is available in the community. One of the goals of detox is to optimize the transition to medication-assisted treatment (MAT) to prevent relapse to opioid use. As the authors point out, the lack of a smooth transition to XR-NTX was the main reason for treatment failures and poorer outcomes in that group in the intent-to-treat analysis.

3.  Besides the detoxification protocol, the other resources needed to facilitate the transition from detox to MAT maintenance are unknown - It is clear that transitioning the patient from detox to MAT is a critical step in the treatment process. That involves not only the medication but the structure of the program and individual patient support at that time. People leave treatment over sustained and untreated withdrawal symptoms, and that includes severe psychiatric comorbidity: severe anxiety with or without panic attacks, insomnia that often involves days of no sleep and drenching night sweats, and depression. There is often a lot of confusion over which symptoms are due to an associated psychiatric disorder and which symptoms are due to withdrawal. The confusion can be heightened if the patient comes in being treated for anxiety, insomnia, or depression with a maintenance medication. The current paper does not describe an optimal path for treating those patient characteristics (psychiatric disorders and other substance use disorders were exclusion criteria).

4.  Optimal patient selection for BUP-NX versus XR-NTX is unknown - In addition to significant psychiatric symptoms, there are a number of other factors that will influence patient selection, not the least of which are cost and logistics. In many parts of the country it is still extremely difficult to find a BUP-NX provider. Even when a physician is found, many do not accept insurance, and the out-of-pocket cost for patients for both the visits and associated lab tests is prohibitive. XR-NTX is a very expensive injection that may not be covered by insurance companies or patient assistance programs. This study may increase the likelihood of coverage despite the fact that XR-NTX has had an FDA-approved indication for "the prevention of relapse to opioid dependence, following opioid detoxification" since 2010.

5.  Clinicians should use this information to discuss realistic treatment with their patients - as I have previously pointed out, BUP-NX is no panacea and neither is XR-NTX. Contrary to the idea that antagonist therapy prevents overdoses, there were no significant differences in overdose deaths in this study. That should lead to a very serious informed-consent-based discussion about these medications with patients. The question of how long the medication should be taken, or whether it should be taken indefinitely, should not be part of that initial discussion. The focus needs to be on completing detox and transitioning onto one of these medications. The patient's capacity to make a realistic decision and their preferences with regard to these medications are all part of that process. Life is not a randomized clinical trial. Part of the skill set of the physician is the ability to have these discussions. It takes more than the ability to prescribe these medications.

That's my take on the head-to-head comparison of Vivitrol (XR-NTX) and Suboxone (BUP-NX).  Even with effective treatments to prevent relapse to opioid use - many more elements need to be in place.  The practical issue most frequently discussed is the availability of prescribers. Nobody seems to be talking about the fact that some treatment programs offer neither option.  There is also very little discussion about the fact that some treatment programs lack the atmosphere or expertise to provide patients with a shot at being successful and getting off opioids. 

We have come a long way with agents to treat OUD compared to the days when I would see hospitalized heroin addicts who wanted to stop but had no realistic options. I could only offer a 3-day methadone detox, continuation of their methadone maintenance dose, or covering the sympathetic symptoms of withdrawal with clonidine. I could tell them where the closest methadone maintenance program was, but that did not assure them an appointment or a place in that program. Federal law at the time prohibited the active treatment of OUD unless you happened to be in a licensed methadone maintenance program. Now that the legal and regulatory landscape has improved, it is up to treatment programs everywhere to get up to speed and offer state-of-the-art care. It is up to state licensing agencies to not allow treatment centers to take care of these patients if they don't.


George Dawson, MD, DFAPA



References:


1:  Lee JD, Nunes EV Jr, Novo P, Bachrach K, Bailey GL, Bhatt S, Farkas S, Fishman M, Gauthier P, Hodgkins CC, King J, Lindblad R, Liu D, Matthews AG, May J, Peavy KM, Ross S, Salazar D, Schkolnik P, Shmueli-Blumberg D, Stablein D, Subramaniam G, Rotrosen J. Comparative effectiveness of extended-release naltrexone versus buprenorphine-naloxone for opioid relapse prevention (X:BOT): a multicentre, open-label, randomised controlled trial. Lancet. Published online November 14, 2017.



Sunday, August 23, 2015

Evidence Based Urgent Care



I went in to urgent care today after battling an influenza-like illness that I caught on a trip to Alaska, most likely on the flight home. The symptoms are charted in the above graphic. Without providing too much graphic detail on the symptoms, my concern was whether or not I might have pneumonia and needed a chest x-ray. Although I knew this was most likely not an influenza virus, the symptoms were fairly severe. As an example, last Saturday, August 15, I had diffuse muscle pain that was so severe I could barely move. In the two days I took off from work, August 18 and 19, the muscle pain was restricted to the chest wall muscles. The cough had also become productive over the past 5 days. I thought it was reasonable to get it checked out, especially against a backdrop of asthma and chronic asthma therapy.

I took my graphic along with me and showed it to the nurse and the physician.  I told her that I had bronchitis that was probably caused by a respiratory virus.  The nurse was overtly uninterested and at one point said that all she needed was a single symptom to write down and that symptom would be cough.  As she continued writing, she kept glancing at the graphic and taking additional notes.  I wanted to say: "Just scan it in and you can stop writing.  It contains almost all of the information that you need to know."  But I didn't.  I maintained standard medical office decorum.  As Seinfeld once said: "You go from the large waiting room to the smaller waiting room and wait again to see the doctor."  The nurse took all of my vitals including an oxygen saturation and stated matter-of-factly: "They're all normal."  My enthusiastic reply of "Good" was met with dead air.

The doctor walked in and I gave him a brief history.  He looked at my graphic and wrote down a few words.  He listened to all of my lung fields with a stethoscope and then listened to my heart sounds - both through my shirt.  The entire history and exam took about 5 - 10 minutes.  And then:

"You have bronchitis.  There is probably a lot of inflammation in there.  I am going to prescribe prednisone and an antibiotic.  Levaquin is a good one for this.."

At that point, I told him that I was already on two QTc interval prolonging drugs and that Levaquin might not be a good idea.

"OK then I will look up another antibiotic.  Doxycyline is one that should work.  Yes - there is no interaction between doxcycline and your medication.  Any other questions?"

I asked him about the issue of a chest x-ray.  I had three in the last two years and it seemed like the decision was a coin toss.

"I don't think so.  You have sounds all over your lungs and not in one place in particular.  If it doesn't get better I would do a chest x-ray.  Right now it is not going to change what I do."

I walked out with scripts for doxycycline 100 mg BID x 10 days and prednisone 40 mg QDAY x 5 days. The entire length of the visit with the RN and MD was about 15 minutes, and I was the only patient in that clinic.

Of course all during this time, I was comparing the topology of this medical visit and medical care to the common uninformed criticisms of psychiatric care. Just this morning, and totally out of the blue, somebody sent me a link to their letter in the British Medical Journal about the fact that 70% of clinical trials of paroxetine were unpublished. He sent it in response to a post that I had made here some time ago, and apparently was unaware that I had figured out that paroxetine was not a drug that I cared to prescribe by the time I had prescribed it to a second patient. It should be obvious that unpublished clinical trials have been a significant problem in medicine for some time and that this is nothing new in psychiatry. It seems like the prevalent bias against psychiatry rearing its ugly head again.

How about the longstanding claim that psychiatric diagnoses are not valid because there is no "test" for them? What was the "test" I got for bronchitis? Of course there was none. A diagnosis of bronchitis pretty much depends on the symptoms that I walked in with - the same symptoms on the graphic that seemed to be shunned by the RN and only casually interesting to the MD. None of the measurements in the office had anything to do with bronchitis. They were all essentially measures to look at whether or not I had a more significant disease - actually a more significant syndrome. When I was an intern, we thought we had a more scientific way to analyze the problem. We would obtain sputum samples, Gram stain them, and culture them. Once the integrity of the respiratory epithelium is disrupted, all kinds of bacteria colonize the area. The sputum samples were not useful - either in terms of pathogenesis or in guiding antibacterial therapy. Thirty years later, antibacterial treatment of bronchitis is still empirical. No specific pathogen is identified. The thinking used to be that sputum indicated a bacterial infection; now we know it is just sloughed epithelium from the cytotoxic effect of viruses. Empirical treatment of bronchitis is really no different than empirical treatment of any symptom-defined mental illness. Ignoring a couple of hundred specific respiratory viruses is reminiscent of a hostile criticism of psychiatric nomenclature: "It is all one disease." By comparison, acute bronchitis is also one disease.

Another interesting comparison is symptom severity.  I spend a lot of time discussing and documenting this with psychiatric disorders.  In the case of bronchitis, there was no particular interest in severity.  No questions about subjective experience, patterns of the cough, or sputum production.  You either have it or you don't.  Of course, I know that pattern recognition was in place and the physician was looking for signs of more significant illness like tachycardia, tachypnea, diaphoresis, and cyanosis.  But there were not any questions about functional capacity and how I was being affected (again more info in the graphic.)  Psychiatric diagnosis and treatment requires close attention to severity, impact on functional capacity and sleep, and whether the symptoms are in remission.

What about the "evidence basis" of the treatment?  A charitable interpretation of the e-mail about paroxetine would suggest that author was critical of the evidence basis for its use.  It is well known that over half of the drug studies from ClinicalTrials.gov are unpublished and that a significant number of the published trials omit details of interest (3) like side effects.  That same study looked at trials in 7 different medical specialties, none of them psychiatry.

It turns out that in clinical trials, adults with acute bronchitis treated with antibiotics are less likely to be rated as improved at follow-up. Some studies show a shorter duration of cough by about half a day, but the trade-off is a significant increase in antibiotic side effects, with 19% of emergency department visits for adverse drug effects being due to antibiotics (1, 2). A direct quote from UpToDate:

"Patients with known asthma may develop superimposed acute bronchitis.  It is common that such patients seek treatment and are inappropriately prescribed an antibiotic even though they usually have a viral illness."

The UpToDate review also looks at the associated issues of overprescription of antibiotics, the 20-year CDC initiative on antibiotic overprescribing that has essentially failed, and the dire consequences of developing multiple antibiotic-resistant bacterial strains. My purpose here is not to imply anything about my treatment, but to illustrate that these practices are common and there is no criticism of them equivalent to that targeted at psychiatric care. In fact, if I wanted to take on the role of pseudopatient, I could walk into any clinic or emergency department and walk out with the same prescriptions - even in the absence of acute bronchitis. I could simply lie about the symptoms. Nobody is going to ask me for a sputum sample, and 6 of 7 asthmatics have residual wheezing that can be picked up on a cursory exam. Of course there would be a public outcry. I would be accused of lying to hard-working physicians and wasting their time. But that same poorly conceived idea is still cited as evidence against psychiatric diagnosis.

Unlike the unrealistic critics of psychiatry, my goal here is not to embarrass anyone or to illustrate that I am better than anyone. But how is nonpublication of clinical trials of paroxetine (a drug that I have not prescribed in over 20 years) a problem with psychiatry? Nonpublication of clinical trials is obviously a problem for everyone. The poor quality of current clinical trials technology is a problem for everyone and, unlike the Cochrane database, I don't see much point in the exhaustive documentation of predictably low-quality results.

I am also not about to attribute the differences between practice and clinical trials to the art of medicine. This is a problem of analyzing huge amounts of data in biological systems. There are widespread problems with clinical trial design in every area of medicine because that data cannot be adequately analyzed. Contrary to being a "gold standard", there needs to be better stratification of heterogeneous diseases, whether that is depression or bronchitis. We can only have more specific treatments when we have better-characterized molecular pathology and treatments that target that pathology. That includes markers that would suggest which patients would respond to drug treatment and which would not. There is a promising biomarker for identifying which cases of bronchitis should be treated with antibiotics right now, but it is not widely studied or widely available.

The highlights of this post have really not changed since I began pointing out that psychiatry is singled out for criticism by various people with various motivations.  Looking at the facts in this post should leave little doubt that this is merely a continuation in this trend of unrealistic and unfair criticism consistent with the dynamic I outlined in the past.

Some things just don't change.


George Dawson, MD, DFAPA



References:

1:  File TM. Acute bronchitis in adults. In: Post TW, ed. UpToDate. Waltham, MA: UpToDate. (Accessed on August 23, 2015.)

2:  Smith SM, Fahey T, Smucny J, Becker LA. Antibiotics for acute bronchitis. Cochrane Database Syst Rev. 2014 Mar 1;3:CD000245. doi: 10.1002/14651858.CD000245.pub3. Review. PubMed PMID: 24585130.

3: Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013 Dec;10(12):e1001566; discussion e1001566. doi: 10.1371/journal.pmed.1001566. Epub 2013 Dec 3. PubMed PMID: 24311990



Supplementary:

Graphic updated daily for the course of the illness:







This illness finally cleared at about midnight on August 28, after 16 days. The "common cold" typically lasts 2 - 3 weeks and is a significant cause of morbidity in this country. I hope that I have illustrated that it is also a problem in terms of treatment and a lack of real public health measures to reduce the spread of these viruses.

Tuesday, May 12, 2015

APA's Feelgood Move of the Year?


I noticed on another blog that there were expected praises and a few sarcastic comments for the American Psychiatric Association (APA) announcement that they had signed on with another 600 organizations to support the AllTrials initiative.  In case you are unaware of this initiative, Google the name and it will bring you to the web site.  The web site will give you more than enough information on why transparency and enforcement of pharmaceutical company behavior is important and why you should sign the petition.  It will even provide you with a section on "Myths and Objections"  to dispel any concerns that you might have about - well, myths and objections.  It really did not address my concerns about the fact that clinical trials technology as we know it is incredibly crude, they are practically all short term, and they have little to do with how the medication is used in clinical practice.  To me it seems like an exercise to try to keep the pharmaceutical companies honest.  If it also succeeded in keeping the useless meta-analyses of flawed studies out of the literature and prevented the same people who produce those studies from drawing even more flawed conclusions about psychiatry, I would be all for it.  But I doubt that is going to happen.  I also doubt that it will have any effect on the drugs marketed in the USA.  It should be fairly clear from observing FDA behavior that their decisions aren't based on good studies or even the reviews done by their own scientific committees.  You could list any study in the AllTrials database and it could lead to FDA approval whether it was positive, equivocal or negative.

My biggest objection to the APA using its vanishing street cred to sign on to this feelgood initiative is that it doesn't do me any good as an APA member, it doesn't do my patients any good, and it has no implications for the future of the field. It is like signing on to any feelgood initiative: you seem to get credit along with all of the other do-gooders, and all you have to do for it is sign your name. That would bother me a lot less if I did not pay the APA $935/year for professional membership and if they would return my calls once in a while. It would also bother me a lot less if they actually addressed the real problems that their membership is concerned about – the kind of heavy lifting that might really cost something. Ignoring these problems is also costing them something right now in terms of members who are walking away. It doesn't take too many members walking away at $935 apiece to have an impact on the organization. I have discussed the problems many times before on this blog, but here are a few items that should be APA priorities:


1.  MOC/MOL -

The maintenance of certification issue is not fading away with benign neglect at this time. Interestingly, the revolution in this area is being led by the generally more conservative internists and internal medicine specialists, who have eloquently described how little sense it makes to oppress the most oppressed and most accountable professionals out there and pretend that "the public" is demanding more testing and arbitrary exercises to maintain board certification. As if that is not enough, there is the idea that MOC can be converted into a necessary step in licensure (Maintenance of Licensure), which would really put a lock on that unnecessary industry. Until very recently the APA has been completely deaf to members' efforts in this area. I don't know whether the recent interest reflects members voting with their feet and just walking away, or the message that some specialists are not going to cave in on this issue and will go so far as starting their own organization for MOC.

2.  Managed care - utilization review -

Managed care companies and pharmaceutical benefit managers harass psychiatrists to a greater degree than other physicians. These companies have destroyed the infrastructure for inpatient care and any concept of quality in psychiatric care. Practically all psychiatric care in this country is now dictated by these companies and the arbitrary rules they have set in place to ration it. The APA made some initial attempts to explain managed care and advise their members about how to "get along" with these methods. At no point was it suggested that there was a severe ethical problem with allowing for-profit companies to dictate psychiatric care. At no point was there any strategy to illustrate the difference between quality care and rationed care. Instead, we read stories of the mentally ill being incarcerated by the thousands and the inappropriate care they receive in jail.

3.  Managed care - PBMs -

Billions of dollars are wasted every year as pharmaceutical benefit managers ration generic drugs and tie up physicians and their office staff in order to make more profits.  There is no other group of professionals anywhere who are basically forced to work for a managed care company for free in order to help them ration medications and turn a profit.  The only action I have seen from organized psychiatry was a half measure about a standard prior authorization form that for some reason could never be adequately enacted.  There is always something within federal law that favors managed care companies and gets in the way of addressing this.

4.  Organization wide support for models of care other than collaborative care -

There are massive problems with the collaborative care model that is being promoted by the APA along with SAMHSA, the managed care industry, and their partners in government. The hype is at about the same level as the promotion of the managed care industry in the 1980s and '90s. It is obvious how that turned out. The reason people train as specialists is to provide specialty care, not to sit in a primary care clinic and supervise the prescription of antidepressants based on a rating scale without personally assessing patients. The real pipe dream here is that primary care clinics under the accountable care organization model will hire psychiatrists to provide the academically proven cost savings. I guess it was a media moment lost on the APA: the models used to hype the care model and sell it to politicians are not the ones eventually implemented by the "managers". Expect the same rationing and either the complete elimination of psychiatric services to save money or psychiatrists offering their services on their own. The APA should at least be prepared for that and be involved in preparing their members.

5.  Lack of recent professional guidelines -

Contrary to what a lot of people think, the DSM-5 has nothing to do with treatment. The APA has treatment guidelines, most of which are in need of a serious update. They also need guidelines to cover more specific practice situations, such as the treatment of aggressive or suicidal patients. A serious update and an ongoing effort to stay current are necessary in order to prevent the illusion that guidelines put together by a business organization are as good as, and somehow represent, professional standards.


These are a few of the things that psychiatrists and the members of the APA need right now!  I doubt that any of us are going to be impressed with the sign on to the AllTrials initiative.  I would not be surprised to find out that most APA members haven't heard about it.  I am a member and I am on the APA listserv and the APA Facebook feed and did not get any notification about this happening.

I don't see anything wrong with the APA signing on to a feelgood initiative on the face of it.  But over the years the membership has paid a lot of money for an organization that supports professional education, standards and advocacy.  Signing a mass petition for an initiative that is not likely to do anything to advance the science or patient care is a politically correct symbolic gesture.

We need a lot more than that and we have for years.



George Dawson, MD, DFAPA

Tuesday, October 28, 2014

Non-adherence And Other Reasons To Doubt Clinical Trials

In a word - adherence, or what we used to call compliance. That is whether the drug is taken at all and whether it is being taken according to the prescription. Adherence is an important aspect of the working alliance with any physician. Accurately determining the response to a medication depends on whether it is being taken correctly and on the person taking it describing a true treatment response and true side effects. Many patients do not take the medication as prescribed for various reasons. The question of adherence takes on an equally important role in clinical trials. If non-adherence is present it affects the statistical power of the study. Sample sizes are calculated to bring the study into a range where the sample is considered to have an adequate number of subjects, and non-adherence rates can adversely affect that estimation. There is also the question of whether non-adherence reflects another issue related to the medication, like side effects. The related research question is whether non-adherence is evenly distributed across the groups taking the study drug and the placebo group. In previous discussions of the placebo response in trials of psychiatric drugs, I have not seen adherence discussed as a critical factor.
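
To see why non-adherence matters for statistical power, here is a minimal Python sketch under a deliberately crude assumption: non-adherent subjects in the drug arm get no drug effect at all, and nothing else changes. The effect sizes and adherence rates are hypothetical; the sample size formula is the standard two-sample approximation for 80% power at a two-sided alpha of 0.05.

```python
# Minimal sketch: non-adherence dilutes the apparent drug effect and inflates
# the sample size needed per arm. Assumes non-adherent subjects get zero drug
# effect; all numbers are hypothetical and for illustration only.
from math import ceil

def diluted_effect(true_effect_size, adherence_rate):
    """Observed standardized effect if non-adherent subjects get no benefit."""
    return true_effect_size * adherence_rate

def n_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects per arm for a two-sample comparison of means
    (two-sided alpha = 0.05, power = 0.80)."""
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

true_d = 0.5  # hypothetical true standardized effect with perfect adherence
for adherence in (1.0, 0.8, 0.7):
    d_obs = diluted_effect(true_d, adherence)
    print(f"adherence {adherence:.0%}: observed d = {d_obs:.2f}, "
          f"n per arm ≈ {n_per_arm(d_obs)}")
```

Even this crude model shows that a drop from full adherence to 70% roughly doubles the number of subjects needed to detect the same underlying effect.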

A brief news report in Science this week by Kelly Servick provides a good discussion of the adherence issue from a number of perspectives. The central graphic is from a paper by Blaschke, et al (2) showing summary data from electronic monitoring of medication adherence in 95 clinical trials: both the rate of taking the medication at all and the rate of taking it as prescribed fall off significantly over time. By 100 days, 20% of subjects have stopped taking the medication and about 30% are no longer taking it as prescribed. Those are substantial numbers, especially if the active drug can be identified by a specific effect or side effect and discontinued on that basis. In a field where there is a significant placebo response among subjects with mild to moderate illness, non-adherence can lead to significant problems in the final outcome and overall worth of the study.

According to Servick, the typical approach used in the past has been to recruit enough subjects to counter the low adherence rates. This is problematic for a number of reasons. Subjects these days are often from nonclinical samples. On college campuses this can be a problem, with some subjects volunteering for multiple studies. With psychiatric drug trials, the recruitment criteria are subjective and obvious, and selection is often coordinated by non-physician research coordinators whose job it is to get the required number of volunteers in a specified period of time. In the drug trials that I am personally aware of, only the Alzheimer's disease trials asked for corroboration from sources other than the patients on how they were able to function on a daily basis. It would be very interesting to obtain that kind of data on subjects recruited from university campuses who were still attending classes, especially if some incentive was involved. In a related matter, one of the investigators in this area created a database to identify potential subjects who came in for screening at various sites where he was an investigator. Up to 7.78% of the subjects across 9 sites were identified as duplicates (3). Because of the potential negative effect of duplicate subjects, the authors suggest that a nationwide database of subjects should be considered.
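
As a rough illustration of what a cross-site subject registry could look like, here is a small Python sketch that flags duplicate volunteers at screening using hashed identifiers. This is only a sketch of the general idea under my own assumptions about how identifiers might be matched; it is not a description of how the registry in reference 3 actually works.

```python
# Sketch of a cross-site registry that flags duplicate volunteers at screening.
# Identifiers are hashed so that sites share only a fingerprint, not the raw
# identity. Purely illustrative; not how the registry in reference 3 works.
import hashlib

class SubjectRegistry:
    def __init__(self):
        self._seen = {}  # fingerprint -> site where the subject first screened

    @staticmethod
    def _fingerprint(first, last, dob):
        key = f"{first.strip().lower()}|{last.strip().lower()}|{dob}"
        return hashlib.sha256(key.encode()).hexdigest()

    def check_in(self, first, last, dob, site):
        """Record a screening visit; return the prior site if this is a duplicate."""
        fp = self._fingerprint(first, last, dob)
        if fp in self._seen and self._seen[fp] != site:
            return self._seen[fp]  # already screened at another site
        self._seen.setdefault(fp, site)
        return None

registry = SubjectRegistry()
registry.check_in("Pat", "Smith", "1980-01-01", site="Site A")
duplicate_site = registry.check_in("Pat", "Smith", "1980-01-01", site="Site B")
print("Duplicate previously screened at:", duplicate_site)  # Site A
```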

The article looks at a number of measures to determine the level of adherence in a study. The first take-home message is that pill counts are relatively meaningless. I have certainly talked with research subjects who told me that their blister packs were empty because they just threw the pills away. In a study that compared pill counts with blood levels of the drug, the same sample had an adherence rate of 92% by pill count but only 70% by blood levels. The author cites medication side effects as a reason for non-adherence, but in a heterogeneous sample of patients with varying levels of motivation and insight the reasons can be very complex. Several electronic approaches to adherence have been devised, varying from a chip in the pill bottle cap that records when the bottle is opened and closed (the MEMS system) to a chip in a pill that records when it is ingested.

Adherence measures are another dimension to look for when reading the results of clinical trials. I don't recall seeing any commentary on this important issue in Cochrane reviews, and probably with good reason. Non-adherence rates this high are probably at least as important as what Cochrane typically discusses as technical problems, like small sample size and measurement problems. Blaschke is quoted in the Servick article as saying that many of the researchers in this area feel that the problem is bigger than one that can be detected by surveillance and databases. To me this comes down to the limitations of clinical trials and a problem that potentially cannot be solved. Certainly the days of research units where subjects could be supervised in inpatient settings for months are gone. In most cases, persons with severe psychiatric disorders can only get that kind of treatment if they can personally pay for it or the state they live in has a state psychiatric facility. Even then they often have to undergo civil commitment. A practical solution would be to eliminate the obviously non-adherent subjects from any intent-to-treat analysis and use a standard adherence measure, such as blood levels, where appropriate. Ambivalence about taking a drug in a research protocol is not the same thing as stopping an FDA-approved drug in a clinical setting, but that conscious state has not been adequately studied.


George Dawson, MD, DFAPA

Supplementary 1:  In researching this article I was very pleased to find the full text of Blaschke, et al online, as well as a reference to the National Academy of Sciences Committee on National Statistics. That site contains the report The Prevention and Treatment of Missing Data in Clinical Trials that was referenced in the original Science article. The full report can be obtained at no cost from that site with registration.


References:

1: Servick K. Nonadherence: a bitter pill for drug trials. Science 17 October 2014: Vol. 346 no. 6207 pp. 288-289. DOI: 10.1126/science.346.6207.288

2: Blaschke TF, Osterberg L, Vrijens B, Urquhart J. Adherence to medications: insights arising from studies on the unreliable link between prescribed and actual drug dosing histories. Annu Rev Pharmacol Toxicol. 2012;52:275-301. doi:10.1146/annurev-pharmtox-011711-113247. Epub 2011 Sep 19. Review. PubMed PMID: 21942628

3: Shiovitz TM, Wilcox CS, Gevorgyan L, Shawkat A. CNS sites cooperate to detect duplicate subjects with a clinical trial subject registry. Innov Clin Neurosci. 2013 Feb;10(2):17-21. PubMed PMID: 23556138.

4: Czobor P, Skolnick P. The secrets of a successful clinical trial: compliance, compliance, and compliance. Mol Interv. 2011 Apr;11(2):107-10. doi: 10.1124/mi.11.2.8. PubMed PMID: 21540470; PubMed Central PMCID: PMC3109858.

5:  General search on adherence related articles