Monday, February 11, 2019

Stochastic Processes In Human Biology





I was reading an important research article recently and encountered a term that I had not seen in a while.  That word was stochastic. Physical science and engineering undergraduates probably encounter the term with different frequencies - most often in the expression stochastic process. As a chemistry and biology major, the main contact I had with the term was in what is typically considered the most mathematical of undergraduate chemistry courses - Physical Chemistry (PChem).  When I was an undergraduate we used a text by Moore.  In striving to stay in touch with the field I got a copy of a more current PChem text by Atkins several years ago.  The main reference to stochastic processes in both is the random walk problem.  In Moore it is in the problem set for the chapter on Kinetic Theory, and in Atkins it is in a separate supplement at the end.  That supplement, entitled Random Walk, illustrates how a random walk in one dimension can be used to derive an equation that estimates the probability of a particle being at a given distance from the origin at a specified time during diffusion.  I won't include the equation here since there are a number of books and online sites where the derivation is available.
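The one-dimensional random walk behind that derivation is easy to check numerically. Here is a minimal Monte Carlo sketch - the step count and number of trials are arbitrary illustrative choices, not values from either text:

```python
import random
import statistics

def random_walk_1d(n_steps, rng):
    """Displacement after n_steps unit steps, each +1 or -1 with equal probability."""
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

# After N equal steps, the displacement distribution approaches a Gaussian
# with mean 0 and variance N -- the discrete analogue of diffusion, where
# the mean squared displacement grows linearly with time.
N, TRIALS = 400, 20_000
rng = random.Random(42)
walks = [random_walk_1d(N, rng) for _ in range(TRIALS)]
mean = statistics.fmean(walks)
var = statistics.pvariance(walks)
print(f"mean displacement ~ {mean:.2f}, variance ~ {var:.0f} (theory: 0 and {N})")
```

The simulated variance should land close to N, which is the content of the diffusion result the supplement derives.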

The authors of the research article were using it to describe a component of neuronal activity in the reward center of mice.  In their discussion of the term, they point out that brain complexity precludes the prediction of final outcomes from initial states. In the paper the authors were faced with explaining why a specific population of mice continued to self-stimulate dopaminergic neurons in the ventral tegmental area (VTA) via optogenetic dopamine-neuron self-stimulation (oDASS) through an optic fiber despite punishment, while another group of mice did not.  The details are included in the graphic at the top of this post.  To quote the authors on this phenomenon:

"Why only a fraction of mice lose control remains to be determined; the emergence of the two groups is even more surprising given the high degree of genetic homogeneity of the mouse line used here (our Datcre (also known as Slc6a3cre) mice were backcrossed for more than ten generations into the C57BL/6J mouse line), and may reflect a case of stochastic individuality" (p. 320)

One of the key elements of this type of experimentation is minimizing variance from genetic and environmental factors.  The mice used in this experiment are inbred for several generations and in this case had a homozygous knock-in variation for the dopamine transporter (DAT).  Colonies are developed to raise heterozygous mice with a greater expression of DAT for experimentation on these systems.  The idea behind backcrossing the mice for more than ten generations is to reduce phenotypic variation, but as noted in reference 2, 30 years of experimentation with inbred mice shows that inbreeding reduces phenotypic variance by only 20-30%.  Although the C57BL/6J mouse line has had its DNA characterized, I could not find any numbers for shared DNA on an individual-to-individual mouse basis.  As an example, in humans the following percentages of shared DNA would be expected: identical twins (100%), parent-child (50%), grandparent-grandchild (25%), and first cousins (7.3-13.8%).
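The expected figures follow from simple halving per meiosis. A sketch of the standard kinship arithmetic - this is textbook population genetics, not anything from the paper:

```python
# Expected autosomal DNA sharing halves with each meiosis separating two
# relatives; relationships traced through two common ancestors double the
# result. These are theoretical expectations -- observed values (like the
# 7.3-13.8% range quoted for first cousins) scatter around them because
# recombination and segregation are themselves stochastic.
def expected_shared(meioses, common_ancestors=1):
    return common_ancestors * 0.5 ** meioses

relationships = {
    "parent-child": expected_shared(1),                        # 1 meiosis
    "grandparent-grandchild": expected_shared(2),              # 2 meioses
    "first cousins": expected_shared(4, common_ancestors=2),   # 4 meioses, 2 grandparents
}
for name, fraction in relationships.items():
    print(f"{name}: {fraction:.1%}")
```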

The experiment looked at these mice, bred for a low degree of phenotypic variation and all raised in similar environments.  All of the mice were trained for oDASS and then subjected to foot shock as a punishment.  As noted in the graphic, 60% persevered despite the punishment and 40% renounced oDASS or showed a marked decrease in it after punishment. Both of these responses were considered distinct behavioral phenotypes.  The underlying molecular mechanisms were also studied.

The paper by Honegger and de Bivort (2) takes a more detailed look at examples of stochasticity. From a behavioral standpoint, they define stochastic individuality as non-heritable inter-organism behavioral variation.  The lack of concordance between identical twins is cited as a prime example. Beyond genomics, they argue that stochastic individuality implies that even if we know all about the basis of a trait at the -omics level and all of the relevant environmental factors, the behaviors will remain "beyond reliable prediction".  They review some examples from the animal kingdom. Marbled crayfish and pea aphids reproduce by apomictic thelytoky - that is, all of the individuals are female and reproduction is clonal, so all individuals have the same genome.  Despite this, individuals display significant variation in locomotion and social behavior. They consider nutritional variances, self-reinforcing circuits involving nutrition and behavior, and variation in biophysical developmental processes as possible mechanisms.  In the case of mice, highly inbred mice raised in the same environment vary significantly in their exploration of the environment. Individual mice that actively explore the environment have more robust hippocampal neurogenesis.

The authors detail a number of possible mechanisms underlying stochastic individuality including:

1.  Small numbers of molecules at the molecular level promote stochasticity.  Random fluctuations in RNA polymerase binding to DNA lead to fluctuating mRNA levels in the cell over time. There are also a host of possible mechanisms operating on small gene sets that lead to large numbers of possible outcomes.  An example given is T-cell biology in humans.  T-cells are essential for normal immunity in humans, and the system can use a number of genetic mechanisms to generate a very large number of T-cell receptor types (10^15 to 10^20) from a small number of genes (4).  The authors of that article discuss the fact that T-cell receptor diversity cannot be estimated by a priori assumptions, suggesting this is a stochastic rather than deterministic process.
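The combinatorial arithmetic behind those large numbers can be sketched. The segment counts and insertion lengths below are rough illustrative assumptions (approximate textbook figures for the TCR beta chain), not values from reference 4:

```python
# Rough, illustrative human TCR beta-chain germline segment counts.
v_segments, d_segments, j_segments = 48, 2, 13

# Simple V(D)J segment pairing alone gives only ~10^3 combinations.
pairing = v_segments * d_segments * j_segments

# Junctional diversity: random nucleotide insertions at the two joins.
# If each join can add 0-6 random nucleotides, the junctions contribute:
per_join = sum(4 ** n for n in range(7))   # all sequences of length 0..6
junctional = per_join ** 2                 # two joins (V-D and D-J)

print(f"pairing alone: {pairing:,}")
print(f"with junctional diversity: {pairing * junctional:.1e}")
# Pairing an alpha chain with each beta chain, plus nucleotide deletions,
# multiplies this again toward the quoted 10^15 to 10^20 range.
```

The point is that a handful of gene segments plus random junctional events explodes into a repertoire far too large to enumerate, which is why a priori estimates fail.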

2.  Positive feedback in gene networks amplifies fluctuations and can cause jumping between discrete states, or bistability.  An example from the literature is cell quiescence: there is thought to be a stochastic process involving a specific protein (5,6) that governs the transition from active proliferation (a number of states) to quiescence and back again.
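One way to see how positive feedback produces bistability is to simulate a toy self-activating gene. The Hill-function model and every parameter value below are generic illustrations, not the specific network from references 5 and 6:

```python
import random

def simulate(x0, steps=5000, dt=0.01, noise=0.0, seed=0):
    """Euler-Maruyama integration of dx = (production(x) - x) dt + noise dW,
    for a gene that activates its own production through a cooperative
    (Hill, n = 4) term plus a small basal rate; decay is first-order."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        production = 0.2 + 4.0 * x ** 4 / (2.0 ** 4 + x ** 4)
        x += (production - x) * dt + noise * rng.gauss(0.0, dt ** 0.5)
        x = max(x, 0.0)   # concentrations stay non-negative
    return x

# Without noise the circuit is bistable: trajectories settle into one of
# two attractors depending on which side of the unstable threshold (~1.9)
# they start on.
low = simulate(0.5)    # settles in the "off" state, near 0.2
high = simulate(3.0)   # settles in the "on" state, near 3.95
print(round(low, 2), round(high, 2))

# With noise > 0, runs started at the same point near the threshold can
# end in either state -- fluctuation-driven jumping between discrete states.
```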

3.  Biological processes are non-linear, multidimensional systems with significant feedback, and are therefore often chaotic.  Small initial differences will be amplified.  Several disorders that occur with different concordance in identical twins (seizure disorders, cardiac arrhythmias) could arise as a consequence of this process.  Biological systems that move large amounts of information are the most susceptible.
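The amplification of small initial differences can be shown in the simplest standard chaotic system, the logistic map - a generic illustration of sensitive dependence, not a model of any biological circuit:

```python
# The logistic map at r = 4 is fully chaotic: two trajectories started
# 1e-9 apart become macroscopically different within a few dozen
# iterations -- small initial differences amplified.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9
max_sep = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

print(f"initial separation 1e-09, max separation over 60 steps: {max_sep:.3f}")
```

Separation grows roughly exponentially (the map's Lyapunov exponent is ln 2 per step), so a nanoscale difference saturates to order-one divergence in about 30 iterations.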

4.  Somatic mosaicism - as cellular differentiation and proliferation occur, genomic alterations can occur in new cells or their progeny. Transposons are mobile DNA elements that can excise and re-insert into new positions in the genome, leading to cells and tissues with different genetics from the same progenitor cells.  In my current neurobiology lecture I discuss various levels of brain complexity and give examples of 90 subtypes of potassium channels and significant numbers of vesicle trafficking proteins (7) in neurons.  Wilhelm, et al (7) reconstruct a 3D model of a synapse that contains 60 different proteins in varying copy numbers. In the supplementary data they investigate a total of 100 different synaptic proteins and show that the copy numbers of each protein vary from 10 to 22,000 (Fig. S5).

The complexity and number of the synaptic proteins may seem impressive enough to produce an apparently stochastic result. Looking at what might happen if these proteins are all modifiable by one of the mechanisms in the Honegger and de Bivort paper is more impressive.  The authors discuss alternative splice forms using a Drosophila example.  Proteins are typically formed in eukaryotic cells after the exon portions of a gene's RNA transcript are spliced together and the intron (noncoding) segments are removed. In alternative splicing, the splicing occurs in a way that some segments are altered or skipped. The result is multiple proteins encoded by the same gene. A few examples are illustrated in the following graphic.


  

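The combinatorial reach of alternative splicing is easy to sketch. The exon names and counts below are made up for illustration:

```python
from itertools import product

# Toy gene: constitutive first and last exons plus three "cassette" exons
# that can each be independently included or skipped, giving 2**3 possible
# transcripts from one gene. Real events (mutually exclusive exons, intron
# retention, alternative 5'/3' splice sites) add further variants.
first, last = "E1", "E5"
cassettes = ["E2", "E3", "E4"]

isoforms = []
for keep_flags in product((True, False), repeat=len(cassettes)):
    middle = [exon for exon, keep in zip(cassettes, keep_flags) if keep]
    isoforms.append("-".join([first, *middle, last]))

print(len(isoforms), "possible transcripts:", isoforms)
```

With independent cassette exons the transcript count doubles per exon, which is how a modest number of genes can encode a very large protein repertoire.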
The nervous system implications of alternative splicing include recognizing that it occurs at every level of neuronal organization (8) and that it has already been implicated in neurological diseases like amyotrophic lateral sclerosis (ALS).  To get a quick look at what research may have already been done in this area, I surveyed Medline for references to "alternative splicing" and "protein isoforms" for all of the neuronal proteins listed in table S2 from reference 7.  As noted in the table, there are probably over a thousand references unevenly distributed across these proteins, indicating a significant amount of research on both the mechanism and classification of neuronal protein isoforms - many of which impact neuronal function and may be responsible for stochastic outcomes.

Research On Neuronal Protein Alternative Splicing and Isoforms

Protein              | Alternative splicing references? | "Protein Isoforms"[MeSH Terms] + protein references?
---------------------|----------------------------------|-----------------------------------------------------
SNAP 25              | +                                | +
VAMP 2               | +                                | +
α-SNAP               | +                                | +
α/β-Synuclein        | +                                | +
AP 180               | +                                | +
AP 2 (mu2)           | +                                | +
CALM                 | +                                | +
Calmodulin           | +                                | +
Clathrin heavy chain | +                                | +
Clathrin light chain | +                                | +
Complexin 1,2        | -                                | +
CSP                  | -                                | +
Doc2A/B              | -                                | -
Dynamin 1,2,3        | -                                | +
Endophilin I,II,III  | -                                | +
Epsin 1              | -                                | +
Hsc70                | -                                | +
Intersectin 1        | +                                | +
Munc13a              | -                                | -
Munc18a              | -                                | +
NSF                  | +                                | +
PIPKIγ               | -                                | +
Rab3                 | +                                | +
Rab5a                | -                                | +
Rab7a                | -                                | +
Septin5              | +                                | +
SGIP1                | -                                | -
Synapsin I,II        | +                                | +
Syndapin 1           | +                                | +


At this point I am wrapping up this post with a list of stochastic mechanisms to follow up on in subsequent posts.  This is an important concept for psychiatrists interested in individual differences in behavior as well as unique conscious states. It has the potential to greatly increase the explanatory power of why there is so much heterogeneity in presentations of illness and outcomes, and to provide a clearer understanding of the neurobiological substrates of behavior.  The distinct behavioral phenotypes in the experiment noted at the beginning of this post are a clear case in point.


George Dawson, MD, DFAPA




References:

1: Pascoli V, Hiver A, Van Zessen R, Loureiro M, Achargui R, Harada M, Flakowski J, Lüscher C. Stochastic synaptic plasticity underlying compulsion in a model of addiction. Nature. 2018 Dec;564(7736):366-371. doi: 10.1038/s41586-018-0789-4. Epub 2018 Dec 19. PubMed PMID: 30568192.

2: Honegger K, de Bivort B. Stochasticity, individuality and behavior. Curr Biol. 2018 Jan 8;28(1):R8-R12. doi: 10.1016/j.cub.2017.11.058. PubMed PMID: 29316423.

3: Ponomarenko EA, Poverennaya EV, Ilgisonis EV, Pyatnitskiy MA, Kopylov AT, Zgoda VG, Lisitsa AV, Archakov AI. The Size of the Human Proteome: The Width and Depth. Int J Anal Chem. 2016;2016:7436849. doi: 10.1155/2016/7436849. Epub 2016 May 19. Review. PubMed PMID: 27298622.

4: Laydon DJ, Bangham CR, Asquith B. Estimating T-cell repertoire diversity: limitations of classical estimators and a new approach. Philos Trans R Soc Lond B Biol Sci. 2015 Aug 19;370(1675). pii: 20140291. doi: 10.1098/rstb.2014.0291. Review. PubMed PMID: 26150657.

5: Yao G. Modelling mammalian cellular quiescence. Interface Focus. 2014 Jun 6;4(3):20130074. doi: 10.1098/rsfs.2013.0074. Review. PubMed PMID: 24904737.

6: Lee TJ, Yao G, Bennett DC, Nevins JR, You L. Stochastic E2F activation and reconciliation of phenomenological cell-cycle models. PLoS Biol. 2010 Sep 21;8(9). pii: e1000488. doi: 10.1371/journal.pbio.1000488. PubMed PMID: 20877711.

7: Wilhelm BG, Mandad S, Truckenbrodt S, Kröhnert K, Schäfer C, Rammner B, Koo SJ, Claßen GA, Krauss M, Haucke V, Urlaub H, Rizzoli SO. Composition of isolated synaptic boutons reveals the amounts of vesicle trafficking proteins. Science. 2014 May 30;344(6187):1023-8. doi: 10.1126/science.1252884. PubMed PMID: 24876496.

8: Vuong CK, Black DL, Zheng S. The neurogenetics of alternative splicing. Nat Rev Neurosci. 2016 May;17(5):265-81. doi: 10.1038/nrn.2016.27. Review. PubMed PMID: 27094079.

Supplementary:

The Kyoto Encyclopedia of Genes and Genomes (KEGG) currently lists 40 alternative splicing factors. Link

Graphics Credit:

1:  Top graphic -> me.

2:  Alternate splicing graphic from VisiScience per their purchasing agreement.

Click on either graphic to enlarge.


Tuesday, January 29, 2019

The Laparoscopic Cholecystectomy





I had a laparoscopic cholecystectomy done on January 28.  The indications were possible biliary colic with atypical symptoms of gallstone disease, and a gallstone and polyp in the gallbladder noted on abdominal ultrasound.  That scan was done to determine a possible cause of 2 years of bloating and tachycardia.  There was no right upper quadrant pain or colic following fatty foods.  The confounder there is that I hardly eat any fatty foods.  All of my dairy products are fat free.  I have not eaten beef in over 30 years and the only time I eat bacon is as a condiment.  I know the cultural swing in medicine is that fat and cholesterol are now supposed to be "good" for you, but I don't buy that and the proof is my fasting lipid profile. I also recently learned that two of my cousins had cholecystectomies at a young age for gallbladder disease, so there may be a genetic factor.  The diagnosis did take me by surprise and might illustrate the value of retrospective analysis.

When I presented to my internist with symptoms of abdominal bloating and postprandial tachycardia in the absence of any blood test abnormalities, it suggested to him something more like a dumping syndrome, except there were no associated symptoms or diseases. He set up the ultrasound and called me with the results.  Surgical referral was next. The surgeon noted that while I did not have "classic" symptoms of gallbladder disease, he had seen varied presentations over the years, including the symptom set that I came in with.  He suggested cholecystectomy as an option but didn't oversell it: "You may find that those symptoms are not changed after the surgery." He did tell me about a patient who did not want the surgery and managed her illness by diet alone for 20 years until the laparoscopic approach was widely applied.  I thought about it for a few minutes and decided to go with it.

Several factors went into my decision.  First, when I was a freshman in college I had severe appendicitis and had a gangrenous appendix excised and a Penrose drain hanging out of my side for a week that drained the residual tarry remnants of my appendix.  It was the only time in my life I thought I might be better off dead.  Friends visiting me at the time thought I was joking.  So my first thought was that I dodged a bullet at age 18 and I did not want to take that chance again. And then there was my medical school experience. One of my first patients on surgery was an 85 year old man with an ED diagnosis of acute cholecystitis.  He died postoperatively and at autopsy no cause of death was determined.  Could I really afford to try to manage this with diet for the rest of my life and take that kind of chance?  The other patient was a 45 year old woman who presented with an acute abdomen.  In those days imaging was not widely applied and the clinical examination was the key determinant. My surgical attending at the time found out that I was considering psychiatry and went out of his way to berate me.  He challenged me to tell him the diagnosis on the patient. He was certain it was acute appendicitis.  I went with the odds and said she had acute cholecystitis.  The surgical team dissected out a normal appendix and he said: "Looks like the medical student was right" and proceeded to make another incision and remove the gallbladder.  I put more weight on the imaging, but from my discussion with the surgeon there is still a fair degree of uncertainty. If I am an old man some day with abdominal pain it might be useful to tell the physician: "I don't have an appendix or a gallbladder".

The surgery itself seemed like a breeze.  I had a brief pre-op discussion with the surgeon and the anesthesiologist.  My basic concern was that I not get any antibiotic or anesthetic agent that interacts with my existing medication - especially the one that affects cardiac conduction.  They assured me that would not be a problem.  I also told the anesthesiologist that I had never been intubated before or had either inhaled anesthetics or neuromuscular blockade.  I had had several minor surgeries where the agents used were fentanyl and midazolam, and that seemed to work fine.  I also let him know that I have significant arthritis in the neck, and he checked it for range of motion.  They gave me the intravenous pre-anesthetic, wheeled me into the OR, and I was out before I could remember anything.  Totally unconscious without a single dream.

In the recovery area I was stoned for about 2 hours. I remember the anesthesiologist coming in and asking me if I had shoulder pain.  Sure enough I had significant right shoulder pain. Two milligrams of Dilaudid (hydromorphone) not only cleared that up but it never returned.  I had both fentanyl and hydromorphone on board and the nurse kept telling me to "take deep breaths" because my oxygen saturations were dropping.  Eventually I woke up completely, got out of bed, and started walking around.  I reflected back on my gangrenous appendix experience.  The day after that surgery, it took two nurses to stand me up next to the bed and then my doctor pushed my chest in order to straighten me up. Every step resulted in severe abdominal pain.  That was not the case with this surgery. I had 4 puncture wounds in my abdominal wall but I did not have peritonitis. I was not only moving with ease, but I could also flex my abdominal muscles to get out of bed.  The discharge pain medication was oxycodone 5 mg every 4 hours as needed.  I get headaches and mild nausea from it, so I stopped it and switched to acetaminophen.

Post op day #1 - I felt progressively worse getting home.  It was a general flu-like syndrome with mild nausea.  Whenever I think about flu-like illness I think cytokines. There has been a lot of work on cytokine responses, some of it specific to cholecystectomies, but none of it seems very conclusive.  I was also having difficulty voiding despite having to void as a requirement before being discharged from the hospital.  I did not figure out until the next morning that it was probably all part of the physiological changes that occur with abdominal surgery.  That was the most interesting chapter in my undergrad surgery text.  I also wondered about the oxycodone and the intraoperative cephalosporin (cefazolin) given for antibacterial prophylaxis.  I prefer cephalosporins if I need antibiotics and had a similar reaction to cephalexin. With that flu-like syndrome I started to get mild tachycardia and blood pressure elevation that I attributed to anxiety about the flu-like symptoms and continued problems with voiding.  All of my temps were normal.  A final significant symptom was episodes of hiccups, a result of the pneumoperitoneum induced to perform laparoscopic surgery.

Post op day #2 - The voiding problem cleared entirely.  Negligible pain.  I was contacted in follow up by the hospital. They seemed impressed with the lack of pain, especially shoulder pain and advised me to get active and eat foods that might be beneficial for constipation.  The flu-like syndrome seems to be nearly resolved with the exception of some mild facial flushing. At this point it seems like I am on the way to recovery but will post if anything of further interest develops.

I post this experience here to highlight how individual conscious states impact medical decision making - even surgical procedures.  I encountered a number of physicians in this process, but the core physicians were my primary care internist, the physician who did the pre-op assessment, and the surgeon.  My primary care internist has known me for about 30 years. He was highly recommended to me by a psychiatrist who worked in the same clinic.  He performed the most thorough examination and gave me a good differential diagnosis at the beginning before ordering the ultrasound.  He knows me well enough to know that I am neurotic, but when I have a problem it is usually significant.  I know that I can expect a very thoughtful analysis of the problems and that he will always call me in follow up. The physician doing the pre-op physical has done two other pre-op physicals on me in the past year. He is also thorough and notes my concerns on the pre-op history and physical - but the problem is that nobody seems to read them after that.  The surgeon in this case did a cursory exam but provided me with key information about what to expect, up to and including no resolution of symptoms.  He didn't have to tell me about the bad outcomes - I was almost a bad outcome myself.

Everything I read about evidence-based medicine and the corporate standardization of medicine minimizes all of the subjective elements that I have listed above.  I could have seen different physicians at any step in the above sequence and the outcome may have been different.  I could have been a guy who did not have a near death experience with acute appendicitis.  I could have had an internist who told me to try simethicone for gas and not ordered the ultrasound. There is also a level of uncertainty that a lot of people seem unaware of. It is certainly possible that I would die from something else and the gallbladder finding was totally incidental.  UpToDate suggests that is the course for most incidental gallstones: 15-25% become symptomatic over 10-15 years of follow up (1).  But was I already symptomatic? Whether the surgery works for the presenting symptoms is still undecided.

There is always room for personal experience and subjectivity in medicine - on the side of the patient as well as the side of the physician.  People often refer to this as the art of medicine. The only parallel with art is the apparent creativity that occurs when unique conscious states are all focused on trying to solve the same problem. Informed consent is often seen as a medico-legal procedure but it also acknowledges the subjective experience of both people in the room. There are probably multiple paths to address a problem - but the usual debates I see suggest that there is only one right one.


George Dawson, MD, DFAPA


Reference:

1.  Zakko SF. Overview of gallstone disease in adults. In: Chopra S, section ed; Grover S, deputy ed. UpToDate.  Accessed on January 29, 2019.

Graphics Credit:

The above graphic was downloaded from Shutterstock per their standard agreement.




 

Sunday, January 27, 2019

Additional Work On The Review of Systems for Psychiatrists





My post on the review of systems for psychiatrists has had a lot of activity lately, so I decided to post my latest update, which was designed more for patient completion.  The original goal was to have an ROS that can be used like the one you get when you check in at an internist's or surgeon's office.  In the last few years I have been checked in at various offices (internists, orthopedic surgeons, ophthalmologists, urologists, and general surgeons) and have seen all of their ROS forms.  All are relatively brief with just a few symptoms.

The historical context of the ROS is that it seems to have migrated from the physician's exam to the waiting room.  To a skeptic like me the driving force has been the invasion of real medical services by the business and managed care world. That means much less time with a physician but at the same time requires the physician to document much more. A key piece of that documentation is whether an ROS has been done.  Doing the ROS allows the physician to bill for a more comprehensive assessment.  Having a form completed by the patient allows incorporation into the note from that day, which has essentially become a billing document. I think it is safe to say that this has become the primary role of the ROS today.  You will still see your physician for 10 or 15 minutes and they will still ask you some ROS-style questions, but they will probably not ask you about the symptoms you endorsed in the waiting room. At least they never asked me.

The traditional role of the ROS is best described in my copy of DeGowin and DeGowin under System Review (Inventory of Systems) (p. 24):

"This is an outline for careful review of the history by inquiring for salient symptoms associated with each system or anatomic region.  Primarily it is a search from symptoms that may have escaped the taking of the present illness.  These symptoms should be memorized and their diagnostic significance learned. In practice, the answers are not written down except when positive. After the physician has fully mastered the outline, we suggest that he ask the questions when he is examining the part of the body to which they pertain...".

This is followed by a list of 20 symptom headings by body system, including the mental status exam as item number 20.

Over the years there have been additional modifications for psychiatric specialists. The item in DeGowin and DeGowin for the mental status exam is really inadequate for psychiatric purposes, so that is expanded and stands on its own.

There was some ambivalence by members of the profession about the degree of medical care and knowledge required at one point in time.  This led to an expanded "psychiatric review of systems" in some circles that basically looked at symptoms of all of the major categories of psychiatric disorders and counted that as the ROS.  It was modified in some standard interviews that were even used in research.

My methods have always been medically based. That was what my mentors taught me and what the patients I have treated over the years required. Psychiatrists can never lose sight of the fact that every medication they prescribe has the potential for causing serious medical illness and side effects. We can also never lose sight of the fact that practically every FDA package insert for a medication has medical contraindications, medical complications, and significant medication x underlying illness interactions. Finally, we deal with common medical morbidities like hypertension with severe long term side effects that cannot be ignored.

It is in that spirit that I offer the ROS.  It covers what I discuss with patients. It has evolved past the original System Review in that it is now common for patients to have detailed information about relevant diagnoses and diagnostic testing that is directly relevant to their psychiatric care. I have included that where relevant.

I consider this document to be a starting point.  It is for educational purposes only and I am not recommending that you hand it to all of the patients in your waiting area.  I suggest modifying it for your needs and rewriting it for your specific patient populations.  A downloadable Word version is available at the link below.  I dictate all of my evaluations, and this form also facilitates that process if you need to list specific systems and positive and negative symptoms.  As an example, if the ROS is completely negative I will list the first three symptoms as negatives in my dictation.

Any feedback from medical psychiatrists would be greatly appreciated.


George Dawson, MD, DFAPA


References:

1: DeGowin EL,  DeGowin RL. Bedside Diagnostic Examination. 3rd Edition.  Macmillan Publishing Co. Inc; New York; 1976: p: 24-26.


Review of Systems As A Word Document 




Saturday, January 12, 2019

ECT Final Rule





My position on electroconvulsive therapy (ECT) is that it is a safe and effective treatment for severe depression, bipolar depression, and catatonia.  It is typically used in the case of treatment resistant depression or acute life threatening psychiatric disorders. In the era prior to modern psychiatric treatment, food and water refusal, extreme agitation, uncontrolled aggression, and suicidal behavior all occurred and resulted in excessive mortality. In the case of delirious mania or catatonia, the mortality was estimated as high as 80%.  In the acute care setting where I worked we established a practice where two or three physicians in the group specialized in ECT.  That was done initially because the malpractice premiums were higher for ECT practitioners.  As time went by we were told that from an insurance perspective the risk associated with ECT was not any greater than standard psychiatric practice, and all of the premiums became the same.  From an actuarial standpoint ECT was considered a safe procedure.

From a cultural perspective ECT is a highly stigmatized treatment.  There are very few movies where the public gets a realistic perspective of how it is used. More typically it is presented as a punishment or torture rather than a safe and effective medical treatment.  The people I see in consultation are surprised to hear that it is still in use.

Sometime in about 2011, the Food and Drug Administration (FDA) decided to start an initiative about reclassifying ECT devices from Class III (high risk) to Class II (low risk) medical devices.  In their classification system for medical devices, Class III is the most restrictive because it requires premarket approval.  Class II is less restrictive because it can be approved with special controls or recommended measures to mitigate risk.  Class I is least restrictive and requires general controls.  The reason why the FDA initiated this reclassification attempt in 2011 is unclear to me.  The reason why it initially failed is fairly common knowledge. The antipsychiatry movement has been against ECT since before the time of Breggin's protest (1) in the New England Journal of Medicine over a book review by Mandel (2) that discredited his negative assessment of ECT.  A quote from Mandel's review from 40 years ago:

"Dr. Breggin's arguments fail because he uses supporting data uncritically and inaccurately. At a time when reasoned discourse and scientific exchange concerning ECT are needed, he simply calls for the abolition of the treatment on the basis of his personal conclusions. A critical reader will find this book of interest only as an example of how the fires of controversy can be fanned by emotion."

Any time I have attempted to debate the merits of ECT in a public forum (like Twitter) there is a general trend among the antipsychiatrists to jump on whatever I say and quote Breggin's book or post links to his dated and inaccurate work.  Irrespective of how the FDA initiative started, it did offer some hope that a neutral federal agency could put some of this controversy to rest.

The FDA came out with the Final Order on the reclassification of ECT devices on December 26, 2018.  As far as I can tell there was no fanfare.  Deflating controversy typically has that effect.  It took me about three hours to read through the document and it was a very interesting read.  The FDA used various sources to look at the issues of what they abbreviate as SE - safety and effectiveness.  They go through their decision making systematically, including a section at the end where they address criticisms of ECT, the FDA, and in some cases professional organizations that hold that ECT is safe and effective, like the American Psychiatric Association (APA).  The FDA gives unequivocal responses to these criticisms, each beginning "The FDA disagrees with..."

There is a very interesting section on the most controversial aspect of ECT - memory loss.  I have excerpted the studies and FDA summary statements in the table below:

Reference: Fernie, G., et al., "Detecting Objective and Subjective Cognitive Effects of Electroconvulsive Therapy: Intensity, Duration and Test Utility in a Large Clinical Sample." Psychological Medicine, 2014. 44(14): pp. 2985–2994.

FDA comments (excerpted): Overall, the application of ECT had reversible cognitive deficiencies compared to pre-ECT treatment scores, a measure of safety, and in some assessments (CANTAB, subjective reports of memory function, and MMSE) showed patient improvement.

Reference: Kirov, G.G., et al., "Evaluation of Cumulative Cognitive Deficits from Electroconvulsive Therapy." British Journal of Psychiatry, 2016. 208(3): pp. 266–270.

FDA comments (excerpted): Not all subjects were capable of performing all tests and parts of the battery changed over time. Results (linear mixed regression analyses) demonstrated that age, severity of depression at the time of testing, and number of days since the last ECT session were the major factors affecting cognitive performance, but the total number of previous ECT sessions did not have a measurable impact on cognitive performance, which further supports the safety of ECT in not leading to cumulative cognitive deficits.

Reference: Maric, N.P., et al., "The Acute and Medium-Term Effects of Treatment with Electroconvulsive Therapy on Memory in Patients with Major Depressive Disorder." Psychological Medicine, 2016. 46(4): pp. 797–806.

FDA comments (excerpted): At the same time, the neuropsychological tests did not detect any significant memory impairment and showed improvement on visual memory and learning at 1 month and in the immediate post-treatment period, indicating no prolonged or significant ECT-related memory deficits. These improvements correlated with improvement in depression while serious adverse events were not reported.

Reference: Spaans, H.P., et al., "Efficacy and Cognitive Side Effects After Brief Pulse and Ultrabrief Pulse Right Unilateral Electroconvulsive Therapy for Major Depression: A Randomized, Double-Blind, Controlled Study." Journal of Clinical Psychiatry, 2013. 74(11): pp. e1029–1036.

FDA comments (excerpted): No significant difference was seen in retrograde amnesia between the two treatment groups. Change in recall performance and fluency tests were also similar between the two groups. There was not a significant difference in performance in the cognitive tests following ECT for any of the cognitive tests during the course of study. The authors also reported mitigating adverse effects on cognition by lengthening the time between treatments to provide patients with more time to recuperate, thereby further characterizing how ECT treatment can be applied safely.
No significant difference was seen in retrograde amnesia between the two treatment groups. Change in recall performance and fluency tests were also similar between the two groups. There was not a significant difference in performance in the cognitive tests following ECT for any of the cognitive tests during the course of study. The authors also reported mitigating adverse effects on cognition by lengthening the time between treatments to provide patients with more time to recuperate, thereby further characterizing how ECT treatment can be applied safely.
Ghaziuddin, N., et al., ‘‘Cognitive Side Effects of Electroconvulsive Therapy in Adolescents.’’ Journal of Child Adolescent Psychopharmacology, 2000. 10(4): pp. 269–276
The comparison of pre-ECT and the immediate post-ECT testing demonstrated significant impairments of concentration and attention, verbal and visual-delayed recall, and verbal fluency. A complete recovery of these functions was noted in the cognitive testing conducted at 8.5 months. There was no deficit in the ability to problem solve during the initial or the subsequent testing. Cognitive parameters found to be impaired during the first few days of ECT were recovered over several months following the treatment. Therefore, there was no evidence of long-term damage to concentration, attention, verbal and visual memory, or verbal fluency. There were also no impairments of motor strength and executive processing, even during the early (within 7 to 10 days) post-ECT period.

The FDA considered over 400 scientific papers for this reclassification of ECT.  The examples of their conclusory statements about memory-related problems in the table above are consistent with clinical practice. They are careful to point out that there are reports of some people who state they have had permanent memory loss, but they do not get into the potential explanations for that phenomenon. They emphasize that the FDA's role is to comment on safety and efficacy and not on purported neurobiological mechanisms.

There is an extensive comments section where the FDA summarizes arguments from hundreds of comments and answers them definitively. Many of these comments are right out of the antipsychiatry playbook.  Consider the comment below from page 66113.  This is a direct excerpt from the Federal Register:

(Comment 8) Several comments indicated that ECT should be banned. Several comments characterized ECT as inhumane. Commenters indicated that the United Nations Special Rapporteur on Torture and Other Cruel Inhuman or Degrading Treatment or Punishment February 16, 2013, defined ECT without consent as torture.

(Response 8) FDA disagrees that ECT should be banned. Section 516 of the FD&C Act (21 U.S.C. 360f) authorizes FDA to ban a device when, based on all available data and information, FDA finds that the device ‘‘presents substantial deception or an unreasonable and substantial risk of illness or injury.’’ During review of the scientific evidence, FDA did not identify sufficient evidence to ban ECT. FDA determined that special controls, in combination with general controls, can mitigate the identified risks of ECT for certain intended uses and mitigate risks associated with ECT use. FDA determined that there is a reasonable assurance of SE for ECT treatment for the identified indications for use and patient populations. Therefore, FDA has determined that ECT does not present substantial deception or an unreasonable and substantial risk of illness or injury

The FDA, in its disagreement with most of these negative comments, is in line with psychiatric practice and the obvious fact that if ECT were ineffective or resulted in significant injury, psychiatrists would not be using it or doing more extensive research on similar neurostimulation techniques to make neuromodulation as noninvasive and effective as possible.

All things considered, this was a very positive statement about ECT. The FDA points out the limitations of their regulatory scope.  For example, even though they did not include an extensive list of conditions as indications for ECT, they acknowledge that once a device is approved it can be used for off-label conditions. They are also careful to point out that they are not endorsing ECT as the treatment of choice for the named conditions - and it is never used that way. Their regulatory language specifies "treatment resistant" and "require rapid response" to define the clinical population for which ECT benefits outweigh the risks.

I don't see psychiatrists having any problem with that language since that is the population we have always used this modality for.


George Dawson, MD, DFAPA



References:

1: Breggin PR. Electroconvulsive therapy for depression. N Engl J Med. 1980 Nov27;303(22):1305-6. PubMed PMID: 7421975.

2: Mandel MR. Electroshock: Its Brain-Disabling Effects (book review). N Engl J Med. 1980 Aug 14;303:402. DOI: 10.1056/NEJM198008143030721.

3: Food and Drug Administration, HHS. Neurological Devices; Reclassification of Electroconvulsive Therapy Devices; Effective Date of Requirement for Premarket Approval for Electroconvulsive Therapy Devices for Certain Specified Intended Uses. Final order. Fed Regist. 2018 Dec 26;83(246):66103-24. PubMed PMID:30596410.


Supplementary:

Brandolini's Law is well illustrated by the tactics used by antipsychiatrists in any public forum. In an ideal world the FDA document would be the definitive word on ECT.  I am not that optimistic but encourage people to bookmark this place in the Federal Register and refer to it if you see heated debates about ECT, especially where it is being portrayed as being toxic and/or ineffective.



Wednesday, December 26, 2018

What Does Carl Sagan's Observation About The Effects of Medicine Really Mean?





I started reading Carl Sagan's book The Demon Haunted World based on what I heard about it on Twitter.  Of course I was very familiar with his work from his television persona, but not with his writings. Part of the way in, I ran across the above quote as a footnote on page 13. It hit me immediately, based both on my personal medical care and on the thousands of people I have assessed over the years.  There was also a clear contrast with what is in the popular and professional press.  The media has been obsessed with metrics of medical systems for various reasons. First, it is sensational: get hold of a controversial statistic, like the estimated number of people killed by medical interventions every year, convert it into the equivalent number of 757 crashes per year, and you have a hot headline.  Second, generalizations about the quality of health care lack granularity and precision.  There are no epidemiological studies that I am aware of that can even begin to answer the question he asked at the dinner party.

From a research standpoint, studying this endpoint would take an unprecedented level of detail. Standard clinical trials, epidemiological studies, and public health statistics typically look at mortality as an outcome of a few variables in limited age groups. Nobody publishes clear data on mortality avoided - often many times - over the course of a lifetime.  The same is true of avoided morbidity. Researchers seem focused on binary outcomes - life or death, cured or ill, recovered or permanently disabled.

The closest literature seems to be the way that chronic illnesses accumulate over time, but that endpoint is not as striking as a mortality endpoint.  Every study that I have seen looks at a specific cause of mortality or a collection of similar causes, not all possible causes.  The best longitudinal data I have found is in a graph in this article in the Lancet (see figure 1).  Please inspect this graph at the link and notice how the number of chronic illnesses increases with age.  The population studied here was in Scotland.  This is very impressive work because, as far as my research goes, I can find no other graph of chronic illnesses with this level of detail.  Data in the US use very crude age groups and the number of chronic illnesses counted is often limited.  If I look at CMS data for Medicare beneficiaries, 34.5% have 0-1 chronic medical conditions, 29.5% have 2-3 conditions, 20.7% have 4-5 conditions, and 15.3% have 6+ conditions.  By selecting certain age groups the percentages change in the expected directions.  For example, among beneficiaries less than 65 years of age, 45% have 0-1 conditions and 11.2% have 6+ conditions.

Chronic conditions are the most significant part of medical effort and expenditure, but as indicated in Sagan's quote - they are only a part of the medical experience of people across their lifetimes. Now that some previously fatal conditions are being treated as chronic conditions, the demarcation between morbidity and mortality is blurred even further. The experience of being treated and cured of a potentially fatal illness is left out of the picture.  Being treated and cured of multiple potentially fatal conditions over the course of a lifetime is also not captured.  Two common examples are acute appendicitis and acute cholecystitis.   These diagnoses alone account for 280,000 appendectomies and 600,000 cholecystectomies each year. The vast majority of those people go on to live normal lives after the surgery.  Those surgical treatments are just a portion of the surgery performed each year that is curative, results in no further disability, and in many situations prevents significant mortality and morbidity.  A more complete metric of life saving, disability preventing, and disease course modification would be most useful to determine what works in the long run and where the priorities should be.

One way to get at those dimensions would be to look at what I would call the Sagan Index.  Since I am sure that there are interests out there licensing and otherwise protecting Carl Sagan's name I am going to use the term Astronomer Index instead - but make no mistake about it - the concept is his.

I have included an initial draft of what this index might look like in the supplementary notes below.  A higher number reflects the number of times a life has been saved or disability prevented.  Consider the following example - an average guy in his 60s.  When I take his history he recalls being hospitalized for anaphylaxis at age 16 and a gangrenous appendix at 19.  With the appendectomy he had a Penrose drain in his side and a complicated hospital stay.  He traveled to Africa in his 20s, where he got peptic ulcer disease, malaria, and 2 additional episodes of anaphylaxis from vaccines he was allergic to. When he was 42 he had an acute esophageal obstruction and needed an emergency esophagogastroduodenoscopy.  At age 45 he insisted on treatment for hypertension.  At 50 he had polysomnography, was diagnosed with obstructive sleep apnea (OSA), and was treated with continuous positive airway pressure (CPAP). At age 55 he had multiple episodes of atrial fibrillation and needed to be cardioverted twice.  He is on long-term medication to prevent atrial fibrillation.  At age 60 he had an acute retinal detachment and needed emergency retinal surgery.  At age 66 he needed prostate surgery because of prostatic hypertrophy and urinary tract obstruction.  Assigning a point for each instance of saving his life or preventing disability would yield an Astronomer Index of 11.  In 5 of these situations he would likely have died without medical intervention (3 episodes of anaphylaxis, the gangrenous appendix, and the acute esophageal obstruction), and in the other six he would be partially blind, acutely ill with a urinary tract obstruction and possible renal failure, experiencing the cardiac side effects (or sudden death) of OSA, or possibly having a stroke with disabling neurological deficits from untreated atrial fibrillation. The index would take all of these situations into account as well as life-threatening episodes of psychiatric illness or substance use.
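A back-of-the-envelope sketch of how such an index might be tallied. Everything here is hypothetical - the function name, the event categories, and the one-point-per-event rule are my own illustration of the concept, not a published instrument, and the assignment of each event to a category is just one possible reading of the case above:

```python
# Hypothetical "Astronomer Index" tally: one point per medical
# intervention that either saved a life or prevented disability.

def astronomer_index(events):
    """Return (total score, per-category counts) for a list of
    (description, category) tuples."""
    counts = {"life_saved": 0, "disability_prevented": 0}
    for _description, category in events:
        counts[category] += 1
    return sum(counts.values()), counts

# One possible scoring of the hypothetical patient described above:
events = [
    ("anaphylaxis at age 16", "life_saved"),
    ("gangrenous appendix at 19", "life_saved"),
    ("vaccine anaphylaxis in his 20s, first episode", "life_saved"),
    ("vaccine anaphylaxis in his 20s, second episode", "life_saved"),
    ("acute esophageal obstruction at 42", "life_saved"),
    ("hypertension treatment from age 45", "disability_prevented"),
    ("OSA diagnosed and treated with CPAP at 50", "disability_prevented"),
    ("atrial fibrillation, first cardioversion at 55", "disability_prevented"),
    ("atrial fibrillation, second cardioversion", "disability_prevented"),
    ("emergency retinal surgery at 60", "disability_prevented"),
    ("prostate surgery for urinary obstruction at 66", "disability_prevented"),
]

total, by_category = astronomer_index(events)
print(total, by_category)  # 11 {'life_saved': 5, 'disability_prevented': 6}
```

A real instrument would need weighting (a save from anaphylaxis is not the same as elective prostate surgery), but even a flat count illustrates how many times medicine intervenes over one lifetime.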

Compare the Astronomer Index (AI) to all of the media stories about the number of medical errors that kill people.  Preventing medical errors is an essential goal but it really does not give the average person a measure of how many times things go right.  When I see stories about how many planes full of people die each year because of medical errors I think of a couple of things.  The first is a NEJM article that came out in response to the IOM estimate of people dying from medical errors.  It described the progress that had been made and what problems might be associated with the IOM report.  The second thing I think of is the tremendous number of saves that I have seen in the patients I treat.  I have to take a comprehensive medical history on any new patients that I assess and I have talked with many people who would score very highly on the AI.

Another aspect captured by this index would be the stark reality of medical decision making. The vast majority of people realize this and do not take any type of medical treatment lightly. There is a broad array of responses to these decisions, ranging from rational decision making to denying the severity of the problem. Everyone undergoing medical treatment at some point faces a decision with varying degrees of risk. That should be evident from the current television direct-to-consumer pharmaceutical ads that rapidly list the serious side effects, including death, every time the commercial runs.  The decisions that people make also have to answer the serious question of what their life would be like without the treatment and its associated risk. There are no risk-free treatments as far as I know. In the case of the hypothetical patient, he took at least 11 significant risks in consenting to medical and surgical treatment, and they paid off by his living into his seventh decade.

I have not seen any metric like the AI applied across the population.  Certainly there are many people who make it into their 60s with fewer problems than in our example, but there are also many who have more, along with actual disability.  The available epidemiology of chronic, cured, and partially cured conditions is extremely limited, and I don't see anything that comes close to a metric capturing an individual's lifetime experience the way the AI might.  The rate of change in the index over the lifespan of the individual and across different populations might provide detailed information important for both prevention and service provision. In terms of psychiatric treatment, a good research question would be comparing the response to treatment for psychiatric or substance use disorders between people with high scores on the index and people with low scores. Is there a potential correlation with cognitive decline?

The bottom line for me is that life is hard and most of us sustain considerable damage to our organism over the course of a lifetime.  Only a small portion of that is covered in most medical and epidemiological studies.  This index might provide the needed detail.  It might also provide some perspective on how many times each of us need serious medical treatment over the course of a lifetime.


George Dawson, MD, DFAPA


Reference:

1. Barnett K, Mercer SW, Norbury M, Watt G, Wyke S, Guthrie B. Epidemiology of multimorbidity and implications for health care, research, and medical education: a cross-sectional study. Lancet. 2012;380(9836):37-43.

Supplementary:

A copy of the Astronomer Index (AI) is shown below:




Sunday, December 16, 2018

Morning Report





I don't know if they still call it that or not - but back in the day when I was an intern Morning Report was a meeting of all of the admitting residents with the attendings or Chief of Internal Medicine.  The goal was to review the admissions from the previous night, the initial management, and the scientific and clinical basis for that management. Depending on where you trained, the relationship between house staff and attendings could be affiliative or antagonistic. In affiliative settings, the attendings would guide the residents in terms of management and the most current research that applied to the condition. In the antagonistic settings, the attendings would ask an endless series of questions until the resident presenting the case either fell silent or excelled.  It was extremely difficult to excel because the questions were often of the "guess what I am thinking" nature. The residents who I worked with were all hell bent on excelling.  After admitting 10 or 20 patients they would head to the library and try to pull the latest relevant research.  They may have only slept 30 minutes the night before but they were ready to match wits with the attendings in the morning.

Part of that process was discussing the relevant literature and references.  In those days copies of the relevant research were often circulated, and beyond that there were seminars and research projects that focused on patient care. I still remember having to give seminars on gram negative bacterial meningitis and anaphylaxis.  One of my first patients had adenocarcinoma of unknown origin in his humerus, and two days later the attending wanted to know what I had read about it.  I had a list of 20 references. All of that reading and research required going to a library and pulling the articles in those days.  There was no online access.  But even when there was, the process among attendings, residents, and medical students did not substantially change.

I was more than a little shocked to hear that process referred to as "intuition based medicine" in a recent opinion piece in the New England Journal of Medicine (1).  In this article the authors seem to suggest that there was no evidence-based medicine at all - that we were all just randomly moving about, hoping to accumulate enough relevant clinical experience over the years to make intuitive decisions about patient care.  I have been critical of these weekly opinion pieces in the NEJM for some time, but this one strikes an all-time low. Not only were the decisions 35 years ago based on the available research, but there were often clinical trials being conducted on active hospital services - something that rarely happens today now that most medicine is under corporate control.

Part of the authors' premise here is that evidence-based medicine (EBM) was some kind of advance over intuition-based medicine and that it is now clear it is not all it is cracked up to be. That premise is clearly wrong because there was never any intuition-based medicine before what they demarcate as the EBM period. Secondly, anyone trained in medicine in the last 40 years knew the problems with EBM from the outset - there would never be enough clinical trials of adequate size to include the real patients that people were seeing.  I didn't have to wait to read all of the negative Cochrane Collaboration reviews saying this in their conclusions.  I knew it because of my training, especially training in how to research problems relevant to my patients. EBM was always a buzzword that seemed to indicate some hallowed process the average physician was ignorant of.  That is only true if you completely devalue the training of physicians before the glory days of EBM.

The authors suggest that interpersonal medicine is what is now needed - in other words, that the relationship between the physician and patient (and caregivers) and their social context is relevant, specifically the influence the physician has on them.  Interpersonal medicine "requires recognition and codification of the skills that enable clinicians to effect change in their patients, and tools for realizing those skills systematically." They see it as the next phase in "expanding the knowledge base in patient care," extending EBM rather than rejecting it.  The focus will be on the social and behavioral aspects of care rather than just the biophysical. The obvious connection to biopsychosocial models will not be lost on psychiatrists.  That is straight out of both interpersonal psychotherapy (Sullivan, Klerman, Weissman, Rounsaville, Chevron) and the biopsychosocial model itself by Engel.  Are the authors really suggesting that this was also not a focus in the past?

Every history and physical form or dictation that I ever had to complete contained a family history section and a social history section.  That was true whether the patient was a medical-surgical patient or a psychiatric patient.  Suggesting that the interpersonal, social, and behavioral aspects of patient care have been omitted is revisionism as serious as the idea that intuition-based medicine existed before EBM.

I don't understand why the authors can't just face the facts and acknowledge the serious problems with EBM and the reasons why it has not lived up to the hype.  There needs to be a physician there to figure out what the evidence means and to act as an active intermediary protecting the patient against the shortfalls of both the treatment and the data. As far as interpersonal medicine goes, that has been around as long as I have been practicing as well.  Patients do better with a primary care physician and with seeing a physician who knows them and cares for them over time. They are more likely to take that physician's advice.  Contrary to managed care propaganda (from about the same era as EBM), current health care systems fragment care, make it unaffordable, and waste a huge amount of physician time, taking physicians away from relationships with patients.

Their solution is that physicians can be taught to communicate with patients and then be measured on patient outcomes.  This is basically a managed care process applied to outcomes less tangible than whether a particular medication is started. In other words, it is soft data that is easier to blame physicians for.  In this section they mention that one of the authors works for Press Ganey - a company that markets communication modules to health care providers. I was actually the recipient of such a module that was intended to teach me how to introduce myself to patients. The last time I took such a course was an introductory course on patient interviewing in 1978.  I would not have passed the oral boards in psychiatry in 1988 if I did not know how to introduce myself to a patient.  And yet here I was in the 21st century taking a mandatory course on how to introduce myself after I have done it tens of thousands of times.  I guess I have passed the first step toward the new world of interpersonal medicine.  I have boldly stepped beyond evidence-based medicine.

I hope there is a lot of eye rolling and gasping going on as physicians read this opinion piece.  But I am also concerned that there is not. Do younger generations of physicians just accept this fiction as fact?  Do they really think that senior physicians are that clueless?  Are they all accepting a corporate model where what you learn in medical school is meaningless compared to a watered down corporate approach that contains a tiny fraction of what you know about the subject?

It is probably easier to accept all of this revisionist history if you never had to sit across from a dead serious attending at 7AM, present ten cases and the associated literature and then get quizzed on all of that during the next three hours of rounding on patients.
 



George Dawson, MD, DFAPA


References:

1: Chang S, Lee TH. Beyond Evidence-Based Medicine. N Engl J Med. 2018 Nov 22;379(21):
1983-1985. doi: 10.1056/NEJMp1806984. PubMed PMID: 30462934.


Graphic Credit:

That is the ghost of Milwaukee County General Hospital, one of the teaching affiliates of the Medical College of Wisconsin.  It was apparently renamed Doyne Hospital long after I attended medical school there.  It was demolished in 2001.  I shot this with 35mm Ektachrome walking to medical school one day. The medical school was on the other side of this massive hospital.






Sunday, December 9, 2018

What Isn't Available In Multimillion Dollar EHRs? Decision Support from 1994


Physician Decision Support Software from the 20th Century




I used to teach a class in medical informatics. My emphasis was on not mistaking a physical illness for a psychiatric one and on not missing medical comorbidity in identified psychiatric patients.  The class was all about decision-making, heuristics, and recognition of the biases that cause errors in medical decisions. Bayesian analysis and inductive reasoning were a big part of the course. About that time, software packages became available to assist in diagnostic decisions. Some of them had detailed weighting estimates to show the relative importance of diagnostic features.  It was possible to enter a set of diagnostic features and get a listing of probable diagnoses for further exploration. I printed out some examples for class discussions, and we also reviewed research papers and looked at the issue of pattern recognition by different medical specialists.

The available software packages of the day were reviewed in the New England Journal of Medicine (1).  In that review, 10 experts came up with written case summaries that were cross-checked for validity and pared down to a final set of 105 cases.  The four software programs (QMR, Iliad, Dxplain, and Meditel) were compared on their ability to suggest the correct diagnosis. Two of the programs used Bayesian algorithms and two used non-Bayesian algorithms. The authors point out that probability estimates varied based on the literature and clinical data used to establish them. In the test, the developers of each program entered the case findings, and the compared outcomes were the lists of diagnoses produced by each program, rank ordered according to likelihood.

The metrics used to compare the programs were correct diagnosis, comprehensiveness (in terms of the differential diagnosis list generated), rank, and relevance.  The programs' knowledge bases contained only 0.73-0.91 of the cited diagnoses. The programs made the correct diagnosis in 0.52-0.71 of all 105 cases and in 0.71-0.89 of a 63-case subset.  The 63-case list was used because those diagnoses were listed in all 4 knowledge bases.  The authors concluded that the lists generated had low sensitivity and specificity but that unique diagnoses were suggested that the experts agreed might be important. They concluded that studying the performance of these programs in clinical settings, used by physicians, was a necessary next step. They speculated that physicians might use these programs beyond generating diagnoses - for example, looking at how specific findings affect the differential diagnosis.

A study (2) came out five years later that was a direct head-to-head comparison of two different physicians using QMR software to assess 154 internal medicine admissions with no known diagnosis.  In this study physician A obtained the correct diagnosis in 62 cases (40%) and physician B in 56 cases (36%). That difference was not statistically significant. Only 137 cases had the diagnosis listed in the QMR knowledge base. Correcting for that difference, correct diagnoses increased to 45% for physician A and 41% for physician B. The authors concluded that a correct diagnosis listed in the top five diagnoses 36 to 40% of the time was not accurate enough for a clinical setting, but they suggested that expanding the knowledge base would probably improve that rate.
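The correction the authors applied is simple arithmetic - restrict the denominator to the cases whose diagnoses were actually in the knowledge base. A quick sketch (the function name is mine):

```python
def diagnosis_rates(correct, total_cases, cases_in_knowledge_base):
    """Return (raw accuracy, accuracy corrected for knowledge-base
    coverage) as fractions of the respective denominators."""
    return correct / total_cases, correct / cases_in_knowledge_base

# Physician A: 62 correct of 154 admissions; 137 diagnoses were in QMR.
raw_a, corrected_a = diagnosis_rates(62, 154, 137)
# Physician B: 56 correct of the same 154 admissions.
raw_b, corrected_b = diagnosis_rates(56, 154, 137)

print(round(raw_a * 100), round(corrected_a * 100))  # 40 45
print(round(raw_b * 100), round(corrected_b * 100))  # 36 41
```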

Since then the preferred description of this software has become differential diagnosis generators (DDX) (3,4). A paper from 2012 looked at a total of 23 of these programs but eventually included only 4 in its analysis. The programs were tested on ten consecutive diagnosis-focused cases chosen from 2010 editions of the Case Records of the New England Journal of Medicine (NEJM) and the Medical Knowledge Self Assessment Program (MKSAP), version 14, of the American College of Physicians. A 0-5 scoring system was developed that ranged from 1 = diagnosis suggested on the first screen or in the first 20 suggestions to 5 = no suggestions close to the target diagnosis, for a total scoring range of 0-50 across the ten cases. Two of the programs exactly matched the diagnosis 9 and 10 times respectively. These same two programs, DxPlain and Isabel, had identical mean scores of 3.45 and were described as performing well. There was a question of integration with EHRs, but the authors thought that these programs would be useful for education and decision support. They mention a program in development that automatically incorporates available EHR data and generates a list of diagnoses even without clinician input.
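As a sketch of the scoring mechanics only (the exact rubric and its direction should be checked against the original paper; the function and the sample scores below are mine), a per-case score capped at 5 summed over ten cases yields the 0-50 range described:

```python
def total_ddx_score(case_scores):
    """Sum per-case scores; ten cases scored 0-5 each gives a 0-50
    range, with a lower per-case score corresponding to the target
    diagnosis appearing earlier in the suggestion list."""
    assert all(0 <= s <= 5 for s in case_scores), "scores must be 0-5"
    return sum(case_scores)

# Hypothetical results for one program across the ten test cases:
scores = [1, 1, 2, 5, 3, 1, 4, 5, 2, 1]
print(total_ddx_score(scores))  # 25
```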

The most recent paper (4) is a systematic review and meta-analysis of differential diagnosis (DDX) generators. In the introductory section the authors quote a 15% figure for the rate of diagnostic errors in most areas of medicine. A larger problem is that 30-50% of patients seeking primary care or specialty consultation do not get an explanation for their presenting symptoms. The authors looked at the ability to generate correct lists of diagnoses, whether the programs were as good as clinicians, whether the programs could improve clinicians' lists of differential diagnoses, and the practical aspects of using DDX generators in clinical practice. The inclusion criteria resulted in 36 articles comparing 11 DDX programs (see Table 2).  The original paper contains a Forest plot of the results of the DDX generators showing variable (but in some cases high) accuracy but also a high degree of heterogeneity across studies.  The authors conclude that there is insufficient evidence to recommend DDX generators based on the variable quality and results noted in this study.  But I wonder if that is really true.  Some of the DDX generators did much better than others, and one of them (Isabel) refers to this study in its own advertising literature.

My main point in this post is to illustrate that these DDX generators have been around for nearly 30 years and the majority of very expensive electronic health record (EHR) installations have none.  The ones that do are often in systems where the tools are actively being studied by physicians in that group, or where one has been added and its integration with the system is questionable.  In other words, do all of the clinical features import into the DDX generator automatically, so that the responsible clinician can look at the list without having to decide to invoke the tool?  At least one paper in this literature suggests that this eliminates the bias introduced by deciding whether or not to use diagnostic assistance.  In terms of physician workflow, that would seem to be the ideal situation - unless the software stopped the workflow like practically all drug interaction checking software.

The drug interaction software may be a good place to start. Some of these programs are much more intelligent than others. In the one I am currently using, trazodone x any serotonergic medication is a hard stop, and I have to produce a flurry of mouse clicks to move on.  More intelligent programs do not stop the workflow for this interaction or for SSRI x bupropon interactions.  There is also the question of where artificial intelligence (AI) fits in.  There is a steady stream of headlines about how AI can make medical diagnoses better than physicians, and yet there is no AI implementation in EHRs designed to assist physicians.  What would AI have to say about the above drug interactions? Would it still stop my work process and cause me to check a number of exception boxes? Would it be able to produce an aggregate score of all such prescriptions in an EHR and provide a probability statement for a specific clinical population?  The quality of clinical decisions could only improve with that information. 
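The difference between the cruder and the more intelligent programs comes down to whether each interaction pair carries its own severity tier, or whether every match triggers the same hard stop. The following is a minimal sketch of the tiered approach, not the logic of any actual EHR product; the drug pairs, severity assignments, and function names are illustrative assumptions only and are not clinical guidance.

```python
# Sketch of a tiered drug-interaction check. Instead of a blanket hard
# stop, each interaction pair carries a severity tier that determines
# whether the prescribing workflow is interrupted. All pairs and tiers
# below are hypothetical examples, not clinical recommendations.

HARD_STOP, SOFT_ALERT, INFO = "hard_stop", "soft_alert", "info"

# Interaction table keyed by an unordered drug pair.
INTERACTIONS = {
    frozenset(["trazodone", "fluoxetine"]): SOFT_ALERT,  # serotonergic combo: warn, don't block
    frozenset(["sertraline", "bupropion"]): INFO,        # commonly co-prescribed: note only
    frozenset(["lithium", "ibuprofen"]): HARD_STOP,      # NSAID can raise lithium levels
}

def check_interactions(med_list):
    """Return (pair, severity) for every known interaction in med_list."""
    alerts = []
    meds = [m.lower() for m in med_list]
    for i in range(len(meds)):
        for j in range(i + 1, len(meds)):
            pair = frozenset([meds[i], meds[j]])
            if pair in INTERACTIONS:
                alerts.append((tuple(sorted(pair)), INTERACTIONS[pair]))
    return alerts

for pair, severity in check_interactions(
        ["lithium", "ibuprofen", "sertraline", "bupropion"]):
    print(pair, severity)
```

In this scheme only the hard_stop tier would interrupt the workflow; the other tiers would surface as dismissible notices, which is roughly the behavior the more intelligent programs already show.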

Then there is the question of what psychiatrists would use a DDX generator for.  The current crop has a definite internal medicine bias.  Psychiatrists and neurologists need an entirely different diagnostic landscape mapped out.  The intersection of psychiatric syndromes, toxidromes, and primary neurological disorders needs to be added and expanded upon. I am currently experimenting with the Isabel package and need to figure out the best way to use it.  My test paradigm is a patient recently started on lithium who develops an elevated creatinine, but who was also started on cephalexin a few days after the lithium.  Entering all of those features seems to produce a random list of diagnoses and leaves open the question of whether the increasing creatinine is due to lithium or cephalexin.  It appears that the way the diagnostic features are entered may affect the outcome.

Decision support is supposed to be a feature of the modern electronic health record (EHR). The reality is that the only decision support is a drug interaction feature that varies greatly in quality from system to system. Both the drug interaction software and DDX generators are very inexpensive options for clinicians.  EHRs don't seem to get 1990s software right.  And that leads to the question: "Why are EHRs so expensive and why do they lack appropriate technical support for physicians?" 

Probably because they were really not built for physicians.



George Dawson, MD, DFAPA


References:

1: Berner ES, Webster GD, Shugerman AA, Jackson JR, Algina J, Baker AL, Ball EV, Cobbs CG, Dennis VW, Frenkel EP, et al. Performance of four computer-based diagnostic systems. N Engl J Med. 1994 Jun 23;330(25):1792-6. PubMed PMID:8190157.

2: Lemaire JB, Schaefer JP, Martin LA, Faris P, Ainslie MD, Hull RD. Effectiveness of the Quick Medical Reference as a diagnostic tool. CMAJ. 1999 Sep 21;161(6):725-8. PubMed PMID: 10513280.

3: Bond WF, Schwartz LM, Weaver KR, Levick D, Giuliano M, Graber ML. Differential diagnosis generators: an evaluation of currently available computer programs. J Gen Intern Med. 2012 Feb;27(2):213-9. doi: 10.1007/s11606-011-1804-8. Review. PubMed PMID: 21789717.

4: Riches N, Panagioti M, Alam R, Cheraghi-Sohi S, Campbell S, Esmail A, Bower P. The Effectiveness of Electronic Differential Diagnoses (DDX) Generators: A Systematic Review and Meta-Analysis. PLoS One. 2016 Mar 8;11(3):e0148991. doi: 10.1371/journal.pone.0148991. eCollection 2016. Review. PubMed PMID: 26954234.