Tuesday, June 3, 2014

So-called patient management problems have been building up on us over the past 30 years. I first encountered them in the old Scientific American Medicine text. They are currently used for CME and, more importantly, for Maintenance of Certification. For the nonphysicians reading this, they are basically hypothetical patient encounters that claim to rate your responses to fragments of the full patient story in a way that is a legitimate measure of your clinical acumen. I am skeptical of that claim at best and hope to illustrate why.
Consider the patient management problem for psychiatrists in the most recent issue of Focus, the continuing education journal of the American Psychiatric Association (APA). I like Focus and consider it a first-rate source of the usual didactic continuing medical education (CME) materials: read the article, recognize the concepts, and take the CME test. This edition emphasized the recognition and appropriate treatment of Bipolar II Disorder, and it provided an excellent summary of recent clinical trials and treatment recommendations. The patient management problem was similarly focused. It began with a brief description of a young woman with depression, low energy, and hypersomnia. It listed some of her past treatment experience and then offered, for the reader's consideration, several possible points in the differential diagnosis, including depression and bipolar disorder, but also hypersomnia-NOS, obstructive sleep apnea, and a substance abuse problem.

I may not be the typical psychiatrist, but after a few bits of information I would not be speculating about a substance abuse problem, and I would not know what to make of a hypersomnia-NOS differential diagnosis. I would also not be building a tree structure of parallel differential diagnoses in my mind. Like most experts, I have found that the best way to proceed is to move from one clump of data to the next and not go through an exhaustive checklist or a series of parallel considerations. The other property of expert diagnosticians is their pattern-matching ability. Pattern matching consists of rapidly recognizing diagnostic features based on past experience and matching them to previous cases, treatments, and outcomes. Pattern matching also leads to rapid rule-outs based on incongruous features, like an allegedly manic patient with aphasia rather than a formal thought disorder.
If I see a pattern that looks like it may be bipolar disorder, the feature that I immediately home in on is whether or not the patient has ever had a manic episode. That is true whether or not they tell me that they have a diagnosis of bipolar disorder. I am looking for a plausible description of a manic episode, and the less cued that description the better. I have seen evaluations that in some cases say: "The patient does not meet criteria for bipolar disorder." I don't really care whether the specific DSM-5 criteria were asked about, or whether the patient has read them. I need to hear a pretty good description of a manic episode before I start asking about specific details, and I should have enough interview skill to get at that description. The description of that manic episode should also meet actual time criteria for mania: not one hour or four hours, but at least four days of a clear disturbance in mood. I recall reading a paper by Angst, one of Europe's foremost authorities on bipolar disorder, in which he proposed that time criterion based on close follow-up of his research patients, and I have been using it ever since. In my experience, substance-induced episodes of hypomania practically never meet the time criterion for a hypomanic episode. There is also the research observation that many depressed patients have brief episodes of hypomania but do not meet criteria for bipolar disorder. I am really focused on this cluster of data.
On the patient management problem, I would not get full credit for my thinking, because I am only concerned about hypersomnia when I proceed to that clump of sleep-related data, and I am only concerned about substance use problems when I proceed to that clump of data. The patient management problem seems more like a standardized reading comprehension test, with the added element that you have to guess what the author is thinking.
The differential diagnosis points are carried forward until additional history rules them out and only bipolar II depression remains. At that point the treatment options are considered: three for major depression (an antidepressant that had previously been tried, an antidepressant combination, and electroconvulsive therapy) and one for bipolar II depression (quetiapine). The whole point of the preceding review is that the existing evidence points to the need to avoid antidepressants in acute treatment and that the relatively weak data available favor quetiapine. The patient in this case is described as a slender, stylishly dressed young woman. What is the likelihood that she is going to want to take a medication that increases her appetite and weight? What happens when that point comes up in the informed consent discussion?
The real issue is that you don't really need a physician who can pass a reading comprehension test. By the time a person gets to medical school, they have passed many reading comprehension tests. You want a physician who has been trained by seeing thousands of patients in their particular specialty, so that they have honed pattern-matching and pattern-completion capabilities. You also want a physician who is an expert diagnostician and who thinks like an expert. Experts do not read paragraphs of data and develop parallel tree structures in their minds for further analysis. Experts do not treat vague descriptions in a diagnostic manual as anchor points for differential diagnoses. Most of all, experts do not engage in "guess what I am thinking" scenarios when they are trying to come up with diagnoses. Their thinking is their own, and they know whether it is adequately elaborated or not.
This patient management problem also introduced "measurement-based care." The rating from the Inventory of Depressive Symptomatology (IDS) was 31, or moderately depressed, at baseline, with improvement to scores of 6 and 4 at follow-up. Having done clinical trials in depression myself, and having had Hamilton Depression Rating Scale scores correlated with my global rating of improvement, I have my doubts about the utility of rating scale scores. I really doubt their utility when I hear proclamations about how this is some significant advance or, more incredibly, how it is "malpractice" or not the "standard of care" if you don't give somebody a rating scale and record their number. In some monitored systems it is even more of a catastrophe if the numbers are not headed in the right direction. Rating scales of subjective symptoms remain a poor substitute for a detailed examination by an expert, and I will continue to hold up the 10-point pain scale as the case in point. The analysis of the Joint Commission 14 years ago was that this was a "quantitative" approach to pain. We now know that is not accurate, and there is no reason to expect that rating scales are any more of a quantitative approach to depression.
Those are a couple of the issues with patient management problems. The articles also highlight the need for much better pharmacological treatments for bipolar II depression and more research in that area.
George Dawson, MD, DFAPA
Cook IA. Patient Management Exercise - Psychopharmacology. Focus Spring 2014, Vol. XII, No. 2: 165-168.
Hsin H, Suppes T. Psychopharmacology of Bipolar II Depression and Bipolar Depression with Mixed Features. Focus Spring 2014, Vol. XII, No. 2: 136-145.
Monday, November 18, 2013
Evidence-based Maintenance of Certification – A Reply to ABMS
The politics of regulating physicians is no different from politics in general, and that typically has nothing to do with scientific evidence. From the outset it was apparent that some people had the idea that general standardized exams with high pass rates and patient report exercises would somehow keep all of the specialists in a particular field up to speed. That assumes they were not up to speed in the first place. Being a member of a professional organization embroiled in this controversy has given me a front-row seat to the problems with physician regulation and to how things are never quite what they seem to be. From the outset there was scant evidence that recertification exams were necessary, and now that the exams exist, no evidence that I am aware of that they have accomplished anything. The American Board of Medical Specialties (ABMS) actually has a page on their web site devoted to what evidence exists, and I encourage anyone to go there and find any scientific evidence that supports current MOC, much less the approaching freight train of Maintenance of Licensure or the linking of MOC to annual relicensing by state medical boards. Feel free to add that evidence to the comments section for this post.
Prior to this idea, several specialty organizations had their own programs consisting of educational materials: self-study courses on specific topics relevant to the specialty that could be completed every year. A formal proctored examination, and all of the examination fees that entails, was not necessary. The course topics were developed by consensus of the specialists in the field. A couple of years ago I watched a CME course presentation by a member of the ABMS who pointed out that three specialty boards (of a total of 24) wanted to continue to use this method for relicensing and recertification. They were denied the ability to do that because the ABMS has a rule that all of the Boards have to use the same procedure that the majority votes on. The problem was that very few of the physicians regulated by these Boards were aware of the options, or even of the fact that the ABMS would move toward a complicated recertification scheme and eventually push for it to become part of relicensing in many states.
If the ABMS is really interested in evidence-based practice, the options to me are very clear. They currently have no proof that their recertification process is much more than a public relations initiative. Here is my proposal. Do an experiment in which one half of the specialists to be examined that year complete a self-study course on the relevant topics for that year; that can be designated the experimental group. The other half of the specialists receive no intervention other than studying on their own whatever they think might be relevant. Test them all on the topics selected for the self-study group and then compare their test scores to see who does better. Secondary endpoints could be developed to review the practices of each group and determine whether there are any substantial differences on secondary measures thought to be relevant in the tested areas.
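To make concrete how straightforward the primary analysis of that experiment would be, here is a minimal sketch in Python. Every number in it is hypothetical: the arm sizes, mean scores, and score spread are assumptions chosen only to illustrate that comparing the two groups is a routine two-sample comparison, not a methodological obstacle.

# Toy simulation of the proposed experiment. All numbers are invented
# for illustration; they are assumptions, not data from any actual
# certification study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # hypothetical number of specialists per arm

# Experimental arm: completes the directed self-study course on that
# year's topics. Control arm: studies independently on whatever the
# specialist thinks is relevant. Scores are modeled as percent correct
# on the common topic-based examination.
course = np.clip(rng.normal(82, 8, n), 0, 100)
control = np.clip(rng.normal(80, 8, n), 0, 100)

# Primary endpoint: difference in mean examination score between arms.
t, p = stats.ttest_ind(course, control)
print(f"course mean {course.mean():.1f}, control mean {control.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4g}")

The statistics here are trivial; the only real obstacle to running the experiment is the will to run it.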
Until this straightforward experiment is done, the current plan and policies of the ABMS are all speculative and appear to be based on what has been called conventional wisdom. Conventional wisdom appears to be right because all of the contrary evidence is ignored. There is no scientific basis for conventional wisdom, and it falls apart under scrutiny. Physicians in America are currently the most overregulated workers in the world. The rationale for these regulations is frequently based on the need to weed out the few who are incompetent, unethical, or physically or mentally unable to practice medicine. Many regulatory authorities grapple with that task while maintaining public safety, and in many cases it is a delicate balance. But we are far past the point where every physician in the country should be overregulated and overtaxed, based on conventional wisdom, because regulatory bodies are uncomfortable with their ability to identify or discipline the few. If the ABMS or any other medical authority wants evidence-based safeguards for the public based on examination performance, it is time to run the experiment and stop running a public relations campaign in support of the speculative ideas of a few.
George Dawson, MD, DFAPA