
Sunday, February 7, 2016

Anecdotes Are Not Evidence And Other "Evidence-Based" Fairy Tales



A lot depends on the kind of anecdote that you are talking about.  When we talk about anecdotes in medicine that generally means a case or a series of cases.  In the era of "evidence based medicine" this is considered to be weak or non-existent evidence.  Even 30 years ago when I was rounding with a group of medicine or surgery residents, "That's anecdotal..." was a popular piece of roundsmanship used to put down anyone basing an opinion on a case or series of cases.  At the time, series of cases were still acceptable for publication in many mainstream medical journals.  Since then there has been an inexorable march toward evidence-based medicine, and that evidence is invariably clinical trials or meta-analyses of clinical trials.  In some cases the meta-analyses can be interpreted in a number of ways, including interpretations that are opposed to the interpretation of the authors.  That is easy to do when a biologically heterogeneous condition is being studied.

There has been some literature supporting the anecdotal but it is fairly thin.  Aronson and Hauben considered the issue of drug safety.  They made the point that isolated adverse drug reactions are rarely studied at a larger epidemiological level following the initial observation.  They argued that criteria could be established for a definitive adverse event based on a single observation.  In the case of the adverse drug event they said the following criteria could be considered "definitive": extracellular or intracellular deposition of the drug, a specific anatomic location or pattern of injury, physiological dysfunction or direct tissue damage, or infection as a result of an infective agent due to contamination (1).  They provide specific examples using drugs and conclude: "anecdotes can, under the right circumstances, be of high quality and can serve as powerful evidence.  In some cases other evidence may be more useful than a randomized clinical trial.  And combining randomized trials with observational studies, or with observational studies and case series, can sometimes yield information that is not in clinical trials alone."  This is essentially the basis for post-marketing surveillance by the Food and Drug Administration (FDA).  In that case, monitoring adverse events on a population-wide basis increases the likelihood of finding rare but serious adverse events compared with the original trial.

Enkin and Jadad also considered anecdotal evidence as opposed to formal research (2).  They briefly review the value of informal observation and the associated heuristics, but also point out that these same observations and heuristics can lead some people to adhere to irrational beliefs.  The value of formal research is that it provides a check on this process when patterns emerge at a larger scale.  Several experts have applied Bayesian analysis to single case results to illustrate how pre-existing data can be applied to single-case analyses with a high degree of success.  Pauker and Kopelman (3) looked at a case of hypertension and when to consider a search for secondary causes - in this case pheochromocytoma.  They made the interesting observation that:

"Because the probability of pheochromocytoma in a patient with poorly controlled hypertension who
does not have weight loss, paroxysms of sweating, or palpitations is not precisely known, the clinician often makes an intuitive estimate.  But the heuristics (rules of thumb) we use to judge the likelihood of diseases such as pheochromocytoma may well produce a substantial overestimate, because of the salient features of the tumor, its therapeutic importance, and the intellectual attraction of making the diagnosis."   

They take the reader through a complex presentation of hypertension and likelihood ratios used to analyze it and conclude:

"Hoofbeats usually signal the presence of horses, but the judicious application of Bayes' rule can help prevent clinicians from being trampled by a stampeding herd that occasionally includes a zebra."

In other words, by using Bayes' rule you can avoid subjecting patients with common conditions to excessive (and risky) testing aimed at not missing an uncommon condition, and you also won't miss the uncommon condition.  If you have a reasonable idea of the larger prior probabilities, the data that support or refute the diagnosis become much easier to interpret.
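To make the arithmetic concrete, here is a minimal sketch of Bayes' rule in odds form (post-test odds = pre-test odds x likelihood ratio).  The pre-test probability and likelihood ratios below are illustrative assumptions of mine, not numbers taken from reference 3.

def post_test_probability(pretest_prob, likelihood_ratio):
    # Bayes' rule in odds form: post-test odds = pre-test odds * likelihood ratio
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Illustrative assumptions only: a 0.2% pre-test probability of pheochromocytoma
# in a hypertensive patient without suggestive symptoms, and a screening test
# with a positive likelihood ratio of 10 and a negative likelihood ratio of 0.2.
print(round(post_test_probability(0.002, 10), 3))   # positive test -> ~0.02
print(round(post_test_probability(0.002, 0.2), 4))  # negative test -> ~0.0004

Even a "positive" result in this sketch leaves the probability at about 2 percent - the quantitative version of not being trampled by the stampeding herd while still keeping an eye out for the zebra.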

How does all of this apply to psychiatry?  Consider a few vignettes:

Case 1:  A psychiatric consultant is asked to assess a patient by a medicine team.  The patient is a 42 year old man who just underwent cardiac angiography.  The angiogram was negative for coronary artery disease and no intervention was necessary.  Shortly afterwards the patient becomes acutely agitated and the intern notices that bipolar disorder is listed in the electronic health record.  He calls the consultant and suggests an urgent consult and possible transfer to psychiatry for treatment of an acute manic episode.  The consultant arrives to find a man sitting up in bed, appearing very angry and tearful and shaking his head from side to side.

Case 2:  A 62 year old woman is seen in an outpatient clinic for treatment resistant depression.  She has been depressed for 20 years and has had a significant number of medication changes before being referred to a psychiatric clinic.  All of the medications were prescribed by her primary care physician.  She gives the history that she gets an annual physical exam and that they have all been negative.  Except for fatigue and some lightheadedness, her review of systems is negative.  She is taking lisinopril for hypertension, but has no history of cardiac disease.  She has had electrocardiograms (ECGs) in the past but no other cardiac testing.  The psychiatrist discusses the possibility of a tricyclic antidepressant.

Case 3:  An inpatient psychiatrist has just admitted a 46 year old woman who is concerned about the FBI.  She had been working and functioning at a high level until about a month ago, when she started to notice red automobiles coming past her cul de sac at a regular frequency.  She remembered getting into an argument at work about American military interventions in the Middle East.  She made it very clear that she did not support this policy.  She started to make the connection between the red automobiles and the argument at work.  She concluded that an employee must have "turned her in" to the federal authorities.  These vehicles were probably the FBI or Homeland Security.  She noticed clicking noises on her iPhone and knew it had been cloned.  She stopped going into work and sat in the dark with the lights out.  She reasoned it would be easier to spot somebody breaking into her home by the trail of light they would leave.  There is absolutely no evidence of a mood disorder, substance use disorder, or family history of psychiatric problems.  She agrees to testing and all brain imaging and laboratory studies are normal.  She agrees to a trial of low dose antipsychotic medication, but does not tolerate 3 different medications at low doses.

These are just a few examples of the types of clinical problems that psychiatrists encounter on a daily basis that require some level of analysis and intervention.  A psychiatrist over the course of a career encounters 20 or 30 of these scenarios a day and ends up making tens of thousands of these decisions.  What is the "evidence basis" of these decisions?  There really is nothing beyond anecdotes with varying strengths of confirmation.  What kinds of evidence would the evidence-based crowd like to see?  In Case 1, a study that looked at behavioral disturbances after cardiac catheterization would be necessary, although Bayes would suggest the likelihood of that occurring would be very low.  In Case 2, a large trial of treatment approaches to 62 year old women with depression and fatigue would be useful.  I suppose another trial of what kinds of laboratory testing might be necessary could be added, although much of the literature suggests that fatigue is very nonspecific.  In most cases where patients are being seen in primary care, fatigue and depression are indistinguishable.  Extensive testing, even for newer inflammatory markers, yields very little.  Further on the negative side, evidence-based authorities suggest that a routine physical examination and screening tests really add nothing to disease prevention or long term well being.  Case 3 is more interesting.  Here we have a case of paranoid psychosis that cannot be treated due to the patient experiencing intolerable side effects to the usual medications.  Every practicing psychiatrist knows that a significant number of people can't take entire classes of medications.  Here we clearly need a clinical trial of 46 year old women with paranoid psychoses who could not tolerate low dose antipsychotic medication.

By now my point should be obvious.  None of the suggested trials exist and they never will.  It is very difficult to match specific patients and their problems to clinical trials.  Some of the clinical occurrences are so rare (agitation after angiography for example) that it is very doubtful that enough subjects could be recruited and enrolled in a timely manner.  And there is the expense.  There are very few sources that would fund such a study and, these days, very few practice environments or clinical researchers that would be interested in the work.  Practice environments these days are practically all managed care environments where physician employees spend much of their time on administrative work, the company views clinical data as proprietary, and research is frequently focused on advertising and marketing rather than answering useful clinical questions.

That brings us to the larger story of what the "evidence" is.  The anecdotes that everyone seems to complain about are really the foundation of clinical methods.  Medical training is required to experience these anecdotes as patterns to be recognized and classified for future work.  They are much more than defective heuristics.  How does that work?  Consider Case 1.  The psychiatric consultant in this case sees an agitated and tearful man who appears to be in distress.  The medicine team sees a diagnosis of bipolar disorder and concludes the patient is having an acute episode of mood disturbance.  The consultant quickly determines that the changes are acute and rejects the medical team's hypothesis that this is acute mania.  After about 5 questions he realizes that the patient is unable to answer, pulls a pen out of his pocket, and asks the patient to name the pen.  When the patient is not able to do this, he performs a neurological exam and determines the patient has right arm weakness and hyperreflexia.  An MRI scan confirms an area of embolic stroke and the patient is transferred to neurology rather than psychiatry.  The entire diagnostic process is based on the past anecdotal experience of diagnosing and treating neurological patients as a medical student, intern, and throughout his career.  Not to labor the point (too much) - it is not based on a clinical trial or a Cochrane review.

The idea that the practice of medicine comes down to a collection of clinical trials, broken down according to a proprietary boilerplate that generally concludes the quality of most studies is too low to draw conclusions from, is absurd.  Trusting meta-analyses for answers is equally problematic.  You might hope for some prior probability estimates for Bayesian analysis.  But you will find very little in the endless debate (4) about whether antidepressants work or whether they are dangerous.  You will find nothing when these studies are in the hands of journalists who are not schooled in how to practice medicine and know nothing about treating biologically heterogeneous populations and unique individuals.  No matter how many times the evidence-based crowd tells you that you are treating the herd, a physician knows that treating the herd happens one person at a time.  They also know that screening the herd can lead to more problems than solutions.

Treating the herd would not allow you to make a diagnosis of complete heart block and an immediate referral to Cardiology for pacemaker placement (Case 2), or to provide psychotherapy with no medications at all and see eventual recovery (Case 3).  If you accept the results of many clinical trials or their interpretation by journalists, you might conclude that your chances of recovery from a complex disorder are no better than chance.  Nothing could be further from the truth.

That is why most of us practice medicine.



George Dawson, MD, DLFAPA


References:

1:   Aronson JK, Hauben M. Anecdotes that provide definitive evidence. BMJ. 2006 Dec 16;333(7581):1267-9. Review. PubMed PMID: 17170419; PubMed Central PMCID: PMC1702478.

2:  Enkin MW, Jadad AR. Using anecdotal information in evidence-based health care:  heresy or necessity? Ann Oncol. 1998 Sep;9(9):963-6. Review. PubMed PMID: 9818068.

3:  Pauker SG, Kopelman RI. Interpreting hoofbeats: can Bayes help clear the haze? N Engl J Med. 1992 Oct 1;327(14):1009-13. PubMed PMID: 1298225.

4:  Scott Alexander.  SSRIs: Much More Than You Wanted To Know.  Slate Star Codex, posted July 7, 2014.



Attribution:

Photo at the top is Copyright Gudkov Andrey and downloaded from Shutterstock per their Standard License Agreement on 2/3/2016.

Tuesday, June 3, 2014

The Issue With Patient Management Problems

So-called patient management problems have been building up on us over the past 30 years.  I first encountered them in the old Scientific American Medicine text.  They are currently used for CME and, more importantly, Maintenance of Certification.  For nonphysicians reading this, they are basically hypothetical patient encounters that claim to rate your responses to fragments of the entire patient story in a way that is a legitimate measure of your clinical acumen.  I am skeptical of that claim at best and hope to illustrate why.

Consider a recent patient management problem for psychiatrists in the most recent issue of Focus, the continuing education journal of the American Psychiatric Association (APA).  I like Focus and consider it to be a first rate source of the usual didactic continuing medical education (CME) materials.  Read the article, recognize the concepts, and take the CME test.  This edition emphasized the recognition and appropriate treatment of Bipolar II Disorder and it provided an excellent summary of recent clinical trials and treatment recommendations.  The patient management problem was similarly focused.  It began with a brief description of a young woman with depression, low energy, and hypersomnia.  It listed some of her past treatment experience and then listed, for the consideration of the reader, several possible points in the differential diagnosis including depression and bipolar disorder, but also hypersomnia-NOS, obstructive sleep apnea, and a substance abuse problem.  I may not be the typical psychiatrist, but after a few bits of information I would not be speculating on a substance abuse problem and would not know what to make of a hypersomnia-NOS differential diagnosis.  I would also not be building a tree structure of parallel differential diagnoses in my mind.  Like most experts, I have found that the best way to proceed is to move from one clump of data to the next and not go through an exhaustive checklist or series of parallel considerations.  The other property of expert diagnosticians is their pattern matching ability.  Pattern matching consists of rapid recognition of diagnostic features based on past experience and matching them to those cases, treatments, and outcomes.  Pattern matching also leads to rapid rule outs based on incongruous features, like an allegedly manic patient with aphasia rather than a formal thought disorder.

If I see a pattern that looks like it may be bipolar disorder, the feature that I immediately hone in on is whether or not the patient has ever had a manic episode.  That is true whether they tell me that they have a diagnosis of bipolar disorder or not.  I am looking for a plausible description of a manic episode and the less cued that description the better.  I have seen evaluations that in some cases say: "The patient does not meet criteria for bipolar disorder."  I don't really care whether the specific DSM-5 criteria are asked or not or whether the patient has read them.  I need to hear a pretty good description of a manic episode before I start asking about specific details.  I should have enough interview skills to get at that description.  The description of that manic episode should also meet actual time criteria for mania.  Not one hour or four hours but at least 4 days of a clear disturbance in mood.  I recall reading a paper by Angst, one of Europe's foremost authorities on bipolar disorder, in which he proposed that time criterion based on close follow up of his research patients, and I have been using it ever since.  In my experience, practically no substance induced episodes of hypomania meet the time criteria for a hypomanic episode.  There is also the research observation that many depressed patients have brief episodes of hypomania but do not meet criteria for bipolar disorder.  I am really focused on this cluster of data.

On the patient management problem, I would not get full credit for my thinking because I am only concerned about hypersomnia when I proceed to that clump of sleep related data and I am only concerned about substance use problems when I proceed to that clump of data.  The patient management problem seems more like a standardized reading comprehension test with the added element that you have to guess what the author is thinking.

The differential diagnosis points are carried forward until additional history rules them out and only bipolar II depression remains.  At that point four treatment options are considered: three for major depression (an antidepressant that had been previously tried, an antidepressant combination, and electroconvulsive therapy) and one for bipolar II depression (quetiapine).  The whole point of the previous review is that existing evidence points to the need to avoid antidepressants in acute treatment and that the existing relatively weak data favor quetiapine.  The patient in this case is described as a slender, stylishly dressed young woman.  What is the likelihood that she is going to want to take a medication that increases her appetite and weight?  What happens when that point comes up in the informed consent discussion?

The real issue is that you don't really need a physician who can pass a reading comprehension test.  By the time a person gets to medical school they have passed many reading comprehension tests.  You want a physician who has been trained to see thousands of patients in their particular specialty so they have a honed pattern matching and pattern completion capability.  You also want a physician who is an expert diagnostician and who thinks like an expert.  Experts do not read paragraphs of data and develop parallel tree structures in their mind for further analysis.  Experts do not approach vague descriptions in a diagnostic manual and act like they are anchor points for differential diagnoses.  Most of all experts do not engage in "guess what I am thinking" scenarios when they are trying to come up with diagnoses.  Their thinking is their own and they know whether it is adequately elaborated or not.

This patient management problem also introduced "measurement based care".  Ratings from the Inventory of Depressive Symptomatology (IDS) were 31, or moderately depressed, at baseline, with improvement to scores of 6 and 4 at follow up.  Having done clinical trials in depression myself, and having had my Hamilton Depression Rating Scale scores correlated with my global rating of improvement, I have my doubts about the utility of rating scale scores.  I really doubt their utility when I hear proclamations about how this is some significant advance or, more incredibly, how it is "malpractice" or not the "standard of care" if you don't give somebody a rating scale and record their number.  In some monitored systems it is even more of a catastrophe if the numbers are not headed in the right direction.  Rating scales of subjective symptoms remain a poor substitute for a detailed examination by an expert, and I will continue to hold up the 10 point pain scale as the case in point.  The analysis of the Joint Commission 14 years ago was that this was a "quantitative" approach to pain.  We now know that is not accurate and there is no reason to expect that rating scales are any more of a quantitative approach to depression.

Those are a couple of issues with patient management problems.  The articles also highlight the need for much better pharmacological solutions to bipolar II depression and more research in that area.

George Dawson, MD, DFAPA


Cook IA.  Patient Management Exercise - Psychopharmacology.  Focus Spring 2014, Vol. XII, No. 2: 165-168.

Hsin H, Suppes T.  Psychopharmacology of Bipolar II Depression and Bipolar Depression with Mixed Features.  Focus Spring 2014, Vol. XII, No. 2:  136-145.  

Wednesday, December 4, 2013

My First Flu Shot

I got my very first flu shot on 12/3/2013.  Up until now I have depended on my coworkers being vaccinated and protecting me against the virus.  Very recently I have had Tamiflu, and on the occasions I used it I thought that it worked very well.  I have asked repeatedly about getting the shot, including asking the Infectious Disease consultants who promoted the mass immunization of my fellow employees.  Over the years I have asked about 5 of them this question and they all said the same thing: "You can never take this flu vaccine."  My history was: "In 1975 I received two doses of anti-rabies duck embryo vaccine and had two episodes of anaphylaxis."  I was very interested in the new vaccine (Flucelvax) for people with egg allergies and when I asked about it, my primary care doc was initially enthusiastic, but then told me I had to be evaluated by Allergy and Immunology in order to get it.  That led to a comprehensive evaluation that was nearly three hours long.

After the check in and doing some asthma tests, I met the Allergist.  He was about my age and the first thing I noticed was that he was gathering a history in nearly the same way I do.  It was detailed and comprehensive.  Not just the buzz words but what actually happened right down to what that duck embryo vaccine looked like in the syringe.  It was oily and it had particles in it.  Even in those days I was skeptical of the idea that all Peace Corps volunteers going into a specific country needed to take it.  There were about 50 of us and in the two years of service, I don't recall hearing that anyone was bitten by an animal.  The first time I got it, I broke out in hives and had a rash.  My friends took me down to a local Kenyan hospital where they gave me Polaramine (dexchlorpheniramine) and epinephrine.  When I got the second injection, I got intense abdominal cramping, hives, swelling of the face and lips, wheezing and lightheadedness.  At that  point they gave me Benadryl (diphenhydramine) and epinephrine.  Even though I can recall the antihistamine they were using in Kenya at the time, I can't recall why they gave me the second shot.   The Allergist wanted all of these details and more, like when was the first time anything like this happened.

That was 50 years ago.  The anchor point was the JFK assassination.  The day before his funeral I shot myself in the left eye with a BB gun and developed a hyphema.  I was hospitalized for a week and the hemorrhaging resolved completely.  At the follow up, I was in the ophthalmologist's office next to a fish tank.  My face started to swell to the point that my eyes were swollen shut and my lips were distended.  I developed hives over much of my body.  I started to wheeze.  They moved me into a different room and talked with my mother, who told me later that the diagnosis was "psychosomatic reaction".  Apparently the stress of possibly losing an eye or my vision was felt to be a more likely etiology than a moldy fish tank.  For the next 10 years or so, I would start to wheeze when mowing the lawn.  I would get up in the middle of the night with hives or wheezing and drink Diet Pepsi until it went away and I could go back to sleep.  At some point one of the primary care docs in town gave me an epinephrine based inhaler.  I didn't see my first real allergist until I was about 25, after the Peace Corps and while working at my first job cloning evergreen trees.

The skin testing began at that point.  96 patch tests up and down my back, all of them very positive.  I was given a long list of what to avoid and it was basically unavoidable.  I began a long series of immunotherapy injections, but gave up when they did not seem to do anything.  I remembered taking TheoDur the entire time I was in medical school and doing a rotation in Allergy and Immunology.  I gave a presentation about what was known about anaphylaxis at the time and at the end, one of the allergists seriously questioned me about why I was going into psychiatry rather than internal medicine.  During residency, I took my first course of prednisone for a flare up of asthma after a viral infection.  Since then, it has been random episodes of spontaneous anaphylaxis, corticosteroid inhalers and trying to minimize my exposure to them when possible, and using antihistamines and an Epi-Pen when the episodes of anaphylaxis seem particularly bad (that is infrequent).  The Allergist recorded this 50 year history of mostly inadequate treatment.

At the same time, I was noting where I would be in an interview with a person who had lifelong depression and anxiety.  Attempting to reconstruct the episodes of mood disorder and what the symptoms were.  Attempting to correlate them with major life events.  Attempting to determine in retrospect the exact nature of the symptoms and likely etiologies at the time.  Asking myself if the treatments received were appropriate or what they suggested.  Thinking about the resilience or vulnerabilities of the person I was talking with.  It is the same process I use in making diagnoses and treatment plans.  Were there differences?  Of course, and the most noticeable were the objective measures for assessing asthma.  I did the usual assessments of FEV1.0 before and after bronchodilators.  There was also a new assessment of alveolar nitric oxide (NO) as a measure of asthma control.  It would be extremely useful to have tests like that to objectively measure the distress, anxiety, or depression levels of the person sitting in front of me, especially if it involved something as simple as blowing into a tube.

But the most interesting part was that in the end, the Allergist addressed the question about whether I could take an egg cultured influenza vaccine by carefully synthesizing the data and correctly answering the question.  He did not need a test of any sort to answer the question.  He took a meticulous 50 year history of a guy with life-long allergies including asthma and anaphylaxis and correctly concluded that I could be given the shot, even though all of the experts with the same level of training had come to the opposite conclusion.  I got the shot, sat in the clinic for 30 minutes.  The information sheet said that delayed reactions for "up to several hours" could occur.  He told me that would not happen and I went home.  That was almost exactly 24 hours ago.

The lesson here is one that I have seen time and time again in the field of medicine.  The information content in the field is vast.  For any given question, there may be only a certain physician or specialty capable of answering it.  There is no better example than me getting a flu shot, but it also happens daily in the people I see who have had psychiatric disorders for the same length of time, or less, than I have been dealing with allergies and asthma.  No two people with asthma or depression are alike.  Meticulous history taking and pattern matching can get to the correct answer.  Suggestions that we can treat a population of people all in the same way will not.

People are biologically complex and as physicians we should celebrate that.  That also involves getting them to the person who can correctly answer their questions.

George Dawson, MD, DFAPA

Sunday, July 28, 2013

Pattern Matching in Psychiatric Diagnosis

I first heard about pattern matching and the importance it has in medical diagnosis over 30 years ago.  A friend of mine who was in medical school at the time told me about one of his professors who was always interested in the Augenblick diagnosis or the diagnosis that  could be arrived at in the blink of an eye.  He gave me examples of several diagnoses that could be either made immediately or within minutes based on a set of features that would lead to immediate associations in the mind of the clinician without an extensive evaluation.

I had many encounters in my medical training with the same phenomenon.  I can recall being on the Infectious Disease consult team and being asked to see a patient with ascites for the possible diagnosis and treatment of spontaneous bacterial peritonitis.  The consultant was an expert in streptococcal infections, and after patiently listening to the resident's presentation he asked what we thought of the rash on the patient's leg.  The patient had lower extremity edema with a slightly erythematous hue and a slight exudate in areas.  What was the diagnosis?  Without skipping a beat the consultant said this was streptococcal cellulitis and suggested sending a sample to the lab for confirmation.  It was subsequently confirmed and treated.  Why was the attending physician able to hone in on and diagnose this rash when it escaped the detection of two Medicine residents and two medical students?  He was an Infectious Disease specialist and that may have biased him in that direction, but is there something else?

One of the ways that physicians, and probably all classes of diagnosticians, arrive at Augenblick diagnoses or efficiently clump and sort through large amounts of information faster is by pattern matching.  Pattern matching is also the reason why clinical training is necessary to become an adequate diagnostician.  That will not happen with rote learning alone.  It is one thing to read about heart sounds and another to actually experience them, and to have that skill refined by listening to hundreds and thousands of normal hearts and hearts with varying degrees of pathology.  Rashes are classic examples, and several studies have documented that the speed and accuracy with which dermatologists can diagnose a rash is much higher than that of the average physician.  In pattern matching, a recognizable feature of the patient's illness triggers an immediate association with the physician's experiences from the past, leading to a facilitated diagnosis.

Probably the best conceptualization of pattern matching comes from the fields of philosophy and cognitive science.  My favorite author is Andy Clark and his book Microcognition.  He addresses the issue of biologically relevant cognitive science and the model of parallel distributed processing.  A simplified diagram drawn from this model is shown below:


In this case we have a very practical problem of a patient with known bipolar disorder and a question of whether or not they have had a stroke.  The respective clouds (there are many more) represent collections of features of medical diagnoses that may be relevant to the case.  Unlike a textbook, these features represent a lot of varied information, including actual events and nonverbal information like the clinician's past history of diagnosing strokes and caring for people who have had strokes.  Each cloud here can contain hundreds or tens of thousands of these features.  These features are unique aspects of the clinician's conscious state, and the only way to control for variability between clinicians is to assure that physicians in the same specialty have similar exposure to these experiences in their training.  Even in the ideal situation where all specialists have an identical exposure to the same illness, there will be variability based on different levels of ability and other capacities.  An example would be a Medicine resident I worked with whose examination of the heart with a stethoscope predicted the echocardiogram results.  It became kind of a joke on our team at the time that all he had to do was hold his stethoscope in the air in a patient's room and it was as good as an ultrasound.

The basic idea in pattern matching is that the clinician immediately recognizes one of the features they know, and that allows for a rapid diagnosis or plan based on that feature.  Looking at how that works in the hypothetical case, we can look at a few features in the map:


For the purpose of this discussion consider that our patient B is a 60 year old woman with a 35 year history of known bipolar disorder.  She has known her psychiatrist for years.  One day her husband calls with the concern that the patient seems to have developed a problem with communication.  She seems to be talking in her usual voice but he can't comprehend what she is saying.  She does not appear to be manic or depressed.  The psychiatrist listens to the patient on the phone and concludes that she has a fluent aphasia and recommends that they take her to the emergency department as soon as possible.  Ongoing care requires that the psychiatrist talk with the emergency department physician and hospitalist to make sure that acute stroke is high in their differential diagnosis, and eventually go in to the hospital and examine the patient to confirm the diagnosis.
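As a rough illustration of the idea only (my own toy sketch in Python, not Clark's model or the diagram above - the feature sets here are hypothetical and grossly simplified), pattern matching can be thought of as finding the stored feature "cloud" with the greatest overlap with what is actually observed:

# Toy sketch only: hypothetical, grossly simplified feature clouds.
patterns = {
    "acute mania": {"elevated mood", "decreased need for sleep", "pressured speech",
                    "grandiosity", "formal thought disorder"},
    "stroke with fluent aphasia": {"acute onset", "fluent but incomprehensible speech",
                                   "impaired naming", "focal weakness", "normal mood"},
}

def best_match(observed):
    # Return the stored pattern with the greatest feature overlap (Jaccard index).
    def overlap(features):
        return len(observed & features) / len(observed | features)
    return max(patterns, key=lambda name: overlap(patterns[name]))

# Patient B: talking in her usual voice but incomprehensible, not manic or depressed.
observed = {"acute onset", "fluent but incomprehensible speech", "normal mood"}
print(best_match(observed))  # -> "stroke with fluent aphasia", not "acute mania"

The real process obviously involves thousands of weighted, largely nonverbal features acquired through clinical training rather than a handful of labels, which is part of the argument in the conclusions that follow.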

Practically all cases of psychiatric diagnosis require some measure of this pattern matching process, with varying degrees of medical acuity.  I would go so far as to suggest that it is the most important aspect of the diagnosis.  Keep in mind that the pattern matching also applies to the purely psychiatric part of the diagram.  Despite all of the recent criticism and focus on the DSM 5, the elaboration of pattern matching leads us to several important conclusions:

1.  Psychiatric diagnosis is a much more dynamic process than rote learning from a diagnostic manual.  The average clinician should have many more features of diagnoses than are listed in any manual.

2.  Psychiatric diagnosis requires medical training.  There is no way that our psychiatrist in the example could have made the diagnosis of aphasia and remain involved in the diagnostic process to its conclusion without medical training and previous exposures to these scenarios.

3.  The training implications of these scenarios are not often made explicit.  Every medical student, resident and practicing physician needs to be exposed to a diverse population of patients with problems in their area of expertise in order to develop a pattern matching capability.  They can also benefit by asking attending clinicians how they made rapid diagnoses, although at that level of training it is often not obvious that the question needs to be asked.

4.  Removing physicians with these capabilities from the diagnostic loop reduces the capability of that loop.  The best example I can think of is the primary care process where the diagnosis and ongoing treatment of depression or anxiety depends on the results of a checklist that the patient completes in less than 5 minutes.  This assumes that there is an entity out there called depression that is based purely on a verbal description and that pattern matching is not required.  It actually assumes that there is a population of people with this affliction.  Despite all of the hype about how this is "measurement based care" - I don't think that a single person like that exists.

5.  Pattern matching blurs the line between objective and subjective.  There is often much confusion about this line.  Are there "objective criteria" that can be written in a manual somewhere that capture even the basic essence of diagnosing a stroke in a patient with bipolar disorder?  Is there an "objective" checklist out there somewhere that can capture the problem?  Obviously not.  For some reason people tend to equate "subjective" with "bad" or "unscientific".  In the example given and any similar example, the clinician whose subjective state contains the most experience diagnosing strokes is probably the "best" diagnostician - subjective or not.  An "objective" rating scale doesn't stand a chance.

So consider pattern matching to be an important but unspoken part of the diagnostic process.  For obvious reasons it is more important than diagnostic criteria in a manual.  The most obvious of these reasons is that you really cannot practice medicine without it.

George Dawson, MD, DFAPA

Clark A.  Microcognition.  London, A Bradford Book, 1991.


Monday, May 21, 2012

DSM5 - NEJM Commentaries


I highly recommend the two commentaries in the New England Journal of Medicine this week.  The first was written by McHugh and Slavney and the second by  Friedman.  Like Allen Frances they are experienced psychiatrists and researchers and they are likely to have unique insights.  I may have missed it, but I am not aware of any of these authors using the popular press to make typical political remarks about the DSM.  Those remarks can be seen on an almost weekly basis in any major American newspaper.

McHugh and Slavney focus, interestingly enough, on the issue of comprehensive diagnosis as opposed to checklist diagnoses.  It reminded me immediately that the public really does not have the historical context of the DSM or how it is used.  It also reminded me of the corrosive effect that managed care and the government have had on psychiatric practice with the use of "templates" to meet coding and billing criteria in the shortest amount of time.  Finally, it reminded me of the bizarre situation where we have managed care companies and governments combining to validate the concept of a checklist as a psychiatric diagnosis, and court testimony by experts suggesting that it is negligent not to use a checklist in the diagnostic process.

McHugh and Slavney summed up in the following three sentences: “Checklist diagnoses cost less in time and money but fail woefully to correspond with diagnoses derived from comprehensive assessments. They deprive psychiatrists of the sense that they know their patients thoroughly. Moreover, a diagnostic category based on checklists can be promoted by industries or persons seeking to profit from marketing its recognition; indeed, pharmaceutical companies have notoriously promoted several DSM diagnoses in the categories of anxiety and depression.” (p. 1854)

In my home state of Minnesota, the PHQ-9 is mandated to screen all primary care patients being treated for depression and to follow their progress, despite the fact that this was not the intended purpose of the scale and it is not validated as an outcome measure.  The PHQ-9 is copyrighted by Pfizer.

The authors go on to talk about the severe limitations of this approach but at some point they seem to have eliminated the psychiatrist from the equation. I would have concerns if psychiatrists were only taught checklist diagnoses and thought that was the best approach, but I really have never seen that. Politicians, managed care companies, and bureaucrats from both are all enamored with checklists but not psychiatrists. They also talk about the issue of causality and how that could add some additional perspective. They give examples of diagnoses clustered by biological, personality, life encounter, and psychological perspectives. Despite its purported atheoretical basis, the DSM comments on many if not all of these etiologies.

Friedman's essay is focused only on the issue of grief and whether or not DSM5 would allow clinicians to characterize bereavement as a depressive disorder.  That is currently prevented by a bereavement exclusion in DSM-IV, and apparently there was some discussion of removing it.  He discusses the consideration that some bereavement is complicated, such as in the situation of a bereaved person with a prior episode of major depression, and whether the rates of undertreatment in primary care may place those people at risk of no treatment.

There can be no doubt that reducing a psychiatric diagnosis to a checklist loses a lot of information and probably does not produce the same diagnoses. There is also no doubt that the great majority of grieving persons will recover on their own without any mental health intervention. Both essays seem to minimize the role of psychiatrists who should after all be trained experts in comprehensive diagnoses (the kind without checklists). They should be able to come up with a diagnostic and treatment formulation that is independent of the DSM checklists. They should also be trained in the phenomenology of grief and the psychiatric studies of grief and realize that it is not a psychiatric disorder.  If they were fortunate enough to be trained in Interpersonal Psychotherapy they know the therapeutic goals and treatment strategies of grief counseling and they probably know good resources for the patient.

The critiques by all three authors are legitimate, but they are also strong statements for continued comprehensive training of psychiatrists.  There really should be no psychiatrist out there using a DSM as a "field guide" for prescribing therapy of any sort based on a checklist diagnosis.  Primary care physicians in some states and health plans have been mandated to produce checklist diagnoses.  The public should not accept the idea that a checklist diagnosis is the same as a comprehensive diagnostic interview by a psychiatrist.

That is the real issue - not whether or not there is a new DSM.

George Dawson, MD DFAPA



McHugh PR, Slavney PR. Mental illness--comprehensive evaluation or checklist? N Engl J Med. 2012 May 17;366(20):1853-5.

Friedman RA. Grief, depression, and the DSM-5. N Engl J Med. 2012 May 17;366(20):1855-7.
http://www.nejm.org/doi/full/10.1056/NEJMp1201794?query=TOC