Showing posts with label expertise. Show all posts

Sunday, May 3, 2026

Medical Reasoning vs. A Diagnostic Manual

 


I taught a course on medical decision making and how not to mistake a physical illness for a psychiatric disorder from about 1990 to 2002. The main theorists at the time were all internists – Stephen Pauker, Jerome Kassirer, Richard Kopelman, David Eddy, and Harold Sox.  I read their papers and attended their courses.  State-of-the-art in those days involved extensive differential diagnosis, Bayesian analysis, and an awareness of an extensive list of potential cognitive biases. I had been impressed with the need for pattern matching and pattern completion and incorporated all those elements into my course.  I eventually pared it down to about 9 sections in the lecture notes illustrated with case vignettes.

My original emphasis was to recognize that there are several considerations when assessing the medical aspects of psychiatric care.  The first is the medical stability of the patient.  Can they be cared for on a psychiatric unit, or do their medical needs require a medicine or in some cases a surgical service?  Do they need referral to a generalist or specialist?  This is more complicated than it sounds because the patient is seeing a psychiatrist for what is supposed to be a psychiatric problem.  But that presentation is complicated by several factors, including the fact that many patients have no primary care physician and no routine health care maintenance. Many will come into the emergency department concerned about a medical problem but get sent to psychiatry. In that situation, people still get all of the acute medical illnesses, including heart attacks, strokes, asthma attacks, pulmonary emboli, seizures, pneumonia, meningitis, encephalitis, and acute cholecystitis, to name a few.  Many exhibit non-specific behaviors like agitation, crying out, aggression, or unresponsiveness that can be due to either a psychiatric disorder or a medical problem.

The second is a psychiatric presentation of a physical illness in a communicating patient. The classic presentations involve brain pathology that is infectious, inflammatory, vascular, traumatic, or neurodegenerative.  Systemic endocrinopathies and inflammatory disorders are a close second.

Finally, there is the patient with a clear psychiatric disorder who has an intercurrent illness that may or may not be known.  Examples that I have seen many times include current or new-onset diabetes mellitus, profound anemia usually secondary to an upper or lower GI bleed, dermatologic conditions that have often been neglected, symptomatic nutritional deficiencies (B12, folate, D), sexually transmitted diseases, complications of substance use like cirrhosis, and various acute and chronic infectious diseases.

Given that large population with diverse medical and psychiatric problems, as well as diverse presentations that can include denying any physical problems, I typically reviewed how the diagnoses occurred.  Pattern matching was the fastest.  The physician has seen a physical finding, lab result, or behavior many times before, knows what it is, diagnoses it, and treats it.  A good example is a rash.  Dermatologists are rash experts and can correctly classify rashes, including marginal cases, much faster than primary care physicians (4).  The same is true for ophthalmologists and diabetic retinopathy (5).  Until you have seen a person with severe mania or catatonia, neuroleptic malignant syndrome, or serotonin syndrome, it is less likely that you can diagnose the condition by reading criteria in a book.  Patterns are important for all medical specialists.

On the other end of the spectrum is the contemplative side of diagnosis.  There are several possible diagnoses, and it takes additional data, thought, and reasoning to come to a final diagnosis. Every medical student does this in their initial internal medicine rotation.  There is encouragement to produce a list of many diagnoses that might account for the presentation – but even as the case is being recorded or presented that list rapidly narrows to the apparent diagnosis.

In psychiatry, it may take much more data and collateral information to make a specific diagnosis at the initial presentation.  First episode psychosis (FEP) is a case in point. It is very important to determine what the symptom onset was like and whether there were any associated mood symptoms or substance use problems. The patient may not be able to describe the phenomenology, and depending on the circumstances treatment may be initiated while the diagnostic process is ongoing.  In teaching about the diagnostic process, we would spend time discussing what that might look like, combined with a recursive approach to the patient and an awareness of cognitive and emotional biases.  I provided several examples of non-psychiatric physicians making errors due to emotional biases.

Since my course, the literature on medical decision making has changed to some degree.  There is some literature that addresses expertise in general at both the level of cognitive psychology (1) and neurobiology (2).  The general approaches have been to analyze expertise and diagnostic reasoning from the perspective of typical domains (cognitive, perceptual, motor) or to look at a general model and how that has developed over the years.

A dual processing model (3) is generally considered the best current representation of clinical reasoning and decision making.  In this model, there is a fast automatic, heuristic, and unconscious system called Type 1 and a slower conscious, analytical, and effortful system called Type 2.  Additional properties are indicated in the following table.

| Parameter | Type 1 | Type 2 |
|---|---|---|
| Speed | Fast, automatic, unconscious/preconscious, little effort | Slow, deliberate, analytical, varying degrees of effort |
| Control | Minimal control, similar to automatic associations in everyday life except more focused | Control over thought process and direction |
| Systems and Processing | Pattern recognition and completion, implicit learning, access to long-term memory | Working memory and manipulation of data in working memory; planning and reasoning based on that data |
| Memory Systems | Long-term memory | Short-term and working memory |
| Localization | Orbitofrontal cortex (OFC); basal ganglia (caudate, putamen); insula; anterior cingulate cortex; amygdala; hippocampus | Dorsolateral prefrontal cortex (DLPFC); left inferior frontal gyrus; middle frontal gyrus; inferior parietal lobule; precuneus; hippocampus |
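As a toy sketch of how the two systems divide the work, one can think of Type 1 as a fast lookup over stored patterns and Type 2 as a slower, deliberate scoring of a differential. The findings, diagnoses, and weights below are hypothetical and chosen purely for illustration, not clinical reference:

```python
# Toy sketch of the dual-process model: Type 1 as fast pattern lookup,
# Type 2 as slower deliberate scoring of a differential.
# All findings, diagnoses, and weights are hypothetical.

KNOWN_PATTERNS = {
    frozenset({"fever", "rigidity", "autonomic instability"}): "NMS",
    frozenset({"hyperreflexia", "clonus", "serotonergic drug"}): "serotonin syndrome",
}

def type1_match(findings):
    """Fast, automatic: fire only when a stored pattern is fully present."""
    for pattern, dx in KNOWN_PATTERNS.items():
        if pattern <= findings:
            return dx
    return None  # nothing recognized -> hand off to Type 2

def type2_differential(findings, weights):
    """Slow, analytic: score each candidate against the observed findings."""
    scores = {dx: sum(w for f, w in fw.items() if f in findings)
              for dx, fw in weights.items()}
    return max(scores, key=scores.get)

# Classic presentation: Type 1 fires immediately.
classic = {"fever", "rigidity", "autonomic instability"}
print(type1_match(classic))  # -> NMS

# Equivocal presentation: Type 1 fails, Type 2 deliberates over a differential.
equivocal = {"fever", "confusion"}
weights = {
    "NMS": {"fever": 1, "rigidity": 3, "autonomic instability": 2},
    "delirium": {"fever": 1, "confusion": 3},
}
print(type1_match(equivocal))                    # -> None
print(type2_differential(equivocal, weights))    # -> delirium
```

The handoff at the end mirrors the model: the slow system is engaged only when the fast one fails to recognize a stored pattern.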

A clinical example of Type 1 reasoning is when a trained clinician recognizes a classic presentation of a medical illness, diagnosis, or finding.  An example I frequently use is when one of my Infectious Disease attendings, an expert in Streptococcal infections, recognized a characteristic rash from across the room on a patient we had been consulted about for a different problem.  He made the diagnosis within seconds and told us how it could be confirmed.  In studies of the process, the orbitofrontal cortex and limbic connections are activated.  Training is a critical element, especially seeing a maximum number of patterns and their variations.  Although the characterization is that this is a fast and automatic process, there is some room for deliberation, for example in recognizing or attempting to classify equivocal cases without classic presentations.

Type 2 reasoning is considered more typical of the process of differential diagnosis.  The findings are compared, analyzed, and accepted or rejected based on additional data and clinical judgment. This process is thought to localize to the dorsolateral prefrontal cortex (DLPFC), the home of working memory, where data can be maintained and analyzed.  The left inferior frontal gyrus contributes to rule-based reasoning and hypothesis testing.  A clinical example from my experience is the case of the agitated, stuporous patient.  These cases require a great deal of caution because they are most likely to represent a serious or life-threatening illness.  They require a clinician who knows how to examine patients with stupor or coma and can rapidly make sense of the history and findings. It is a problem that can rarely be solved by Type 1 reasoning alone due to a fairly non-specific presentation.  Some of the critical points for hypothesis testing will be signs of increased intracranial pressure, purposeful response to painful stimuli, eye movements, reflex and musculoskeletal exam abnormalities, signs of infection, and meningeal signs.

The interaction between the Type 1 and Type 2 systems is not necessarily sequential, but it can be, with the Type 1 system matching patterns that lead to hypothesis generation.  There is some evidence that in most clinical situations the majority of diagnoses occur with Type 1 reasoning.  Experts can operate at the level of Type 1 reasoning due to extensive experience.  There is not necessarily a hard separation based on the properties in the table; some hypothesis testing can occur at both levels.  Both systems are commonly grounded in the limbic system and the hippocampus.

The human brain is capable of parallel distributed processing of data or information.  This means that there are many processing areas in the brain that are interconnected and they can all be working at once.  The modern conceptualization is brain networks that are active processing areas connected by white matter tracts widely distributed through the brain.  

That brings me to my model of diagnostic reasoning (see lead graphic and click to enlarge).  It is based on the course I taught, neuroanatomy and neurology, and what I have observed clinically. When I was talking about pattern matching 20 years ago based on my observations and reading studies in dermatology, ophthalmology, radiology, and pathology – the term seemed to fade rapidly from the diagnostic reasoning literature.  It was revived somewhat by the more recent focus on AI and comparison of that modality to humans.

There was a lull in Bayesian analysis after the invention of computerized programs like Quick Medical Reference (QMR) and Iliad.  They were designed to facilitate medical diagnosis by providing an exhaustive list of findings and their probabilities. These were 20th century personal computer programs, not AI.  A study of these and two additional programs suggests that the programs got 52-71% of 105 diagnostic cases correct, with 19-37% being the mean proportion of correct diagnoses (6). Despite those figures, the programs provided an average of two additional diagnoses per case that experts considered relevant.  The authors recommended that the programs be used only by physicians who could include the relevant and exclude the irrelevant information provided by the programs.  The programs were discontinued without further modification or updates.
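The core Bayesian arithmetic that programs of this kind mechanized can be sketched in a few lines: start with prior probabilities over a differential and update them as each finding comes in. The diseases, priors, and likelihoods below are invented placeholders rather than clinical figures, and the real programs used vastly larger knowledge bases:

```python
# Minimal sketch of sequential Bayesian updating over a differential.
# All priors and likelihoods are hypothetical, for illustration only.

def bayes_update(priors, likelihoods, finding):
    """Posterior P(disease | finding) via Bayes' rule, then normalize."""
    unnorm = {d: priors[d] * likelihoods[d].get(finding, 0.01) for d in priors}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

priors = {"pneumonia": 0.2, "pulmonary embolism": 0.1, "panic attack": 0.7}
likelihoods = {  # hypothetical P(finding | disease)
    "pneumonia": {"fever": 0.8, "pleuritic pain": 0.4, "hypoxia": 0.6},
    "pulmonary embolism": {"fever": 0.2, "pleuritic pain": 0.7, "hypoxia": 0.7},
    "panic attack": {"fever": 0.01, "pleuritic pain": 0.1, "hypoxia": 0.02},
}

posterior = dict(priors)
for finding in ["fever", "hypoxia"]:
    posterior = bayes_update(posterior, likelihoods, finding)

# After fever and hypoxia, pneumonia dominates the differential.
print(max(posterior, key=posterior.get))  # -> pneumonia
```

Note how a diagnosis with a high prior ("panic attack") is rapidly demoted once findings it poorly explains accumulate, which is exactly the behavior the exhaustive finding lists were meant to support.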

That is the 8-mile-high view.  I plan to do a deeper dive into the neuroanatomy and neurophysiology.  But the clear reality of the situation is that the ability to make a psychiatric diagnosis resides in the brain of a psychiatrist and not in a classification manual or a checklist.   Manuals and checklists are crude approximations of some of the cognitive features that psychiatric experts possess.  Like all experts, psychiatrists will vary in skill based on practice, exposure, and interest because of the effects on these brain systems.  But we are well past the point of equating what a psychiatrist does to a crude manual.  A manual never saved or treated anyone.  Further, the diagnostic reasoning process emphasizes elements that are important for education and training. It seems that in the past decades there has been a preoccupation with evidence-based research rather than the evidence itself. It does not do the physician or patient any good to be in a situation where that physician is unable to communicate with a person who is in a critical state and has no idea how to assess that problem.  Rearranging diagnostic criteria in a manual for the ninth or tenth time does not get you there.

 

George Dawson, MD, DFAPA


Supplementary 1:   Before anyone says the diagram is too complex - it is a general diagram for any human diagnostician.  The main modifications for physicians and psychiatrists are the interactive aspects that include empathic comments, formulations, and numerous verbal interventions that other diagnosticians may not need to use.  The specifics about how these memory systems interact are not known at this point - I will be researching that over the next several months.  I borrowed the superposition concept from quantum mechanics - even though there are no wave functions for memory.         


 References:


1:  Bilalić M.  The Neuroscience of Expertise.  Cambridge University Press. Cambridge, United Kingdom. 2017.

2:  Maguire EA, Gadian DG, Johnsrude IS, Good CD, Ashburner J, Frackowiak RSJ, Frith CD. Navigation-related structural change in the hippocampi of taxi drivers. Proc Natl Acad Sci U S A. 2000;97:4398-4403.

3:  Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking. Acad Med. 2017 Jan;92(1):23-30. doi: 10.1097/ACM.0000000000001421. PMID: 27782919.

4:  Federman DG, Concato J, Kirsner RS. Comparison of dermatologic diagnoses by primary care practitioners and dermatologists. A review of the literature. Arch Fam Med. 1999 Mar-Apr;8(2):170-2. doi: 10.1001/archfami.8.2.170. PMID: 10101989.

5:  Sussman EJ, Tsiaras WG, Soper KA. Diagnosis of Diabetic Eye Disease. JAMA. 1982;247(23):3231–3234. doi:10.1001/jama.1982.03320480047025

6:  Berner ES, Webster GD, Shugerman AA, Jackson JR, Algina J, Baker AL, Ball EV, Cobbs CG, Dennis VW, Frenkel EP, et al. Performance of four computer-based diagnostic systems. N Engl J Med. 1994 Jun 23;330(25):1792-6. doi: 10.1056/NEJM199406233302506. PMID: 8190157.

Thursday, June 4, 2015

Information versus Wisdom




I saw this post on another blog today and thought it was a good title.  I end up pondering this idea almost every day.  In medicine these days we are inundated by data scientists on the one hand and administrators on the other.  The data scientists tell us how they are going to revolutionize medicine through their analysis of large data sets.  The theory is that there are patterns in the data that can be detected only with advanced computational methods.  Having gone through the spreadsheet era and seen how easy it is to prove almost any theory with a large spreadsheet, I am very skeptical of Big Data.  Just dredging through the data, looking for patterns and writing it up does not seem very rigorous to me.  It strikes me more like one of the popular TV shows where the agents are in the field but solidly connected to the computer whiz back at headquarters who is capable of pulling up any document, any floor plan, and hacking into any closed circuit TV system in order to get the information that is needed.  I don't think that science works that way.

On the administrative side, it is the worst of times.  The statistical efforts of administrators are frequently laughable attempts to legitimize the next genius idea to come down the pike.  Their mistakes in healthcare are legendary, ranging from the promise of the electronic health record to the RVU-based management of physicians as widget producers, all exhaustively documented with numbers.  I sat in a meeting one day that showed 95% of the physicians in the department were not "producing" enough to cover their salary.  The problem was that nobody had done the multiplication on "work RVUs".  When the appropriate multiplier was added, it was a different story.  Administrators also tend to collect a lot of numbers that they think will be useful for an analysis, without thinking ahead to the data analysis and statistics.  They seem to have no idea about basic statistical analysis, much less more advanced questions like how to legitimately analyze data over time to detect real differences.  There is no better example than the state of Minnesota collecting PHQ-9 scores over time from anyone trying to treat depression in the state.  They seem to think that unconnected collections of those numbers at different points in time will have some kind of meaning. Administrators also have the habit of creating studies that confirm their vision of the world, and when those studies are complete, that is all of the "proof" that is necessary.  The entire concept of managed care rests on many of those studies.
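A small simulation illustrates why unconnected snapshots mislead: if the patient mix differs between collection points, the cross-sectional means can appear to "improve" even when no individual patient has changed at all. The PHQ-9 scores and samples below are fabricated purely for illustration:

```python
# Why unconnected PHQ-9 snapshots mislead: case-mix differences between
# time points can fabricate "improvement". All scores here are simulated.

# Each patient's PHQ-9 score is identical at both time points (no real change).
scores_t1 = {"A": 20, "B": 18, "C": 12, "D": 6, "E": 4}
scores_t2 = dict(scores_t1)

# The sicker patients happen to be reported at time 1, the healthier at time 2.
sample_t1 = ["A", "B", "C"]
sample_t2 = ["C", "D", "E"]

mean_t1 = sum(scores_t1[p] for p in sample_t1) / len(sample_t1)
mean_t2 = sum(scores_t2[p] for p in sample_t2) / len(sample_t2)
print(f"cross-sectional 'improvement': {mean_t1 - mean_t2:.2f} points")  # 9.33

# A legitimate repeated-measures view pairs each patient with themselves.
paired = [p for p in sample_t1 if p in sample_t2]
mean_change = sum(scores_t1[p] - scores_t2[p] for p in paired) / len(paired)
print(f"within-patient change: {mean_change:.2f} points")  # 0.00
```

The cross-sectional comparison reports a nine-point drop that is entirely an artifact of who happened to be sampled; pairing patients with themselves shows no change, which is the kind of longitudinal analysis the collected numbers would actually need.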

On the wisdom side of things, I can think of no better example than a colleague whom I said goodbye to today.  He worked with me for the past 2 1/2 years.  He is an Internist who is also an Addictionologist and is ABAM (American Board of Addiction Medicine) certified.  He has been a physician since the early days of the HIV/AIDS epidemic, and treating those patients was a significant part of his early practice.  He has an encyclopedic knowledge of the care of those patients and how it has evolved, as well as being an excellent Internist.  He is interested in psychiatry and can talk in psychoanalytic terms.  He is also an expert in LGBT issues and can speak with authority on that subject.  I certainly did not want to see him go, but for the purposes of this post, I can think of no better example of the wisdom that comes with medical practice.  He could be consulted on any number of complex problems in his areas of expertise and provide a very well thought out answer based not so much on information, but on what works and what the potential complications are.  Any physician can tell you that these are the folks you want to work with.  When I think about data mining approaches to these areas of knowledge, I think about the 31-page document available online that looks at the medication interactions of psychotropic and HIV medications.  It is a compulsively detailed and excellent document, but it lacks the wisdom to help you pick the best therapy for a manic patient on tenofovir.

Granted, my position is a thoroughly biased one.   I make no apologies for wanting to work with physicians who have the greatest technical expertise and know how to apply it.  I don't mean people who can recite facts or even algorithms.  I mean the people who know all of that and can look at the patient with the most complicated medical situation and still come up with a plan of action and a way that patient must be closely monitored.  They also know when it is better to do nothing at all, and that is a difficult skill to acquire.  Practically everyone leaves medical school and residency with a strong treatment bias.  You are taught to be "aggressive" and that most of the treatments you provide will do some good even if there is no cure.  In clinical practice, that is far from the truth.  In psychiatry, for example, you have to recognize that there are certain biological predispositions, clinical patterns, boundaries, and personalities that are the warning signs of disaster with certain treatments.

When I first started in medicine in the 1980s, the wisdom-based model was still the predominant model in most clinical settings.  Now it is much less frequent, and there are departments that are just looking for people to fill in the gaps.  They don't necessarily want to retain you; they just want to "keep the numbers up."  They also don't want you spending a lot of time on complex cases, because the payment rates rapidly decline if you are not shuffling people in and out the door.    When the administrators start recruiting bodies based on their revenue models and Hollywood accounting, I hope that I will always end up on the side with the wisdom, rather than a heap of useless information.

There is a lot of that going around these days.  



George Dawson, MD, DFAPA      



Supplementary 1:  I was going to jam in a section commenting on emotional and moral reasoning in view of the expected backlash to the Rosenbaum articles in the NEJM, but decided to add it here instead.  It would have strained the above essay.  It has been an interesting (and fully expected) exercise in political rhetoric.  Predictably, the critical articles mischaracterize her position and ironically are at least as guilty of the fallacies they accuse her of using.  In one case, a new fallacy is pretty much invented.  I think it is instructive to note that in these matters, logic goes out the window.  There is no pathway to a sound judgment.  It basically involves rallying the troops to see who can shout the loudest.  My self-proclaimed bias above is part of the reason I am firmly on her side (but will refrain from the shouting).  For anyone who thinks like me, there is no convincing me that the appearance of conflict of interest is the same as actual conflict of interest.  There is no convincing me that free pizza and donuts will cause me to blindly prescribe a medication - probably because I have not eaten Big Pharma food since the early 1990s.  In fact, if I try a more plausible thought experiment about how much cold hard cash it would take to pay me off to prescribe a drug, I can't come up with a figure, but it would have to be absolutely stratospheric compared with the usual speaker fees for which people are listed in the Sunshine Act database.  All of that is based on the fact that I work for a living and treat real patients.  I am accountable to those patients.  If a medication does not produce real results or causes too many side effects (like my early experience with paroxetine), it is off the Dawson formulary and I don't prescribe it again.

This is of course like arguing with Democrats and Republicans.  I know that some who equate the appearance of COI with actual COI will strongly dislike my experience and the way my thoughts on this matter are anchored in the way I practice medicine.  That is the nature of arguing about emotional and moral reasoning in what the Institute of Medicine (IOM) describes as an ethical vacuum.  The recent editorials certainly don't prove a thing.

The usual focus of these debates also leaves out the big picture that many entire University departments (math, science, engineering) actively collaborate with industries and in many cases actively invite industry participation in order to advance those fields.  The notion that physicians are not able to do that because they have a sacred trust to patients and would be somehow compromised remains implausible to me, particularly when nearly all of the major decisions that physicians make in this country have been seriously tampered with if not controlled by managed care companies and pharmaceutical benefit managers for nearly 30 years.

That is a massive conflict of interest that nobody talks about and it affects 80% of all of the healthcare in this country.

Supplementary 2: The graphic at the top of the post is from Shutterstock.