
Sunday, December 16, 2018

Morning Report





I don't know if they still call it that or not - but back in the day when I was an intern, Morning Report was a meeting of all of the admitting residents with the attendings or the Chief of Internal Medicine. The goal was to review the admissions from the previous night, the initial management, and the scientific and clinical basis for that management. Depending on where you trained, the relationship between house staff and attendings could be affiliative or antagonistic. In affiliative settings, the attendings would guide the residents in terms of management and the most current research that applied to the condition. In antagonistic settings, the attendings would ask an endless series of questions until the resident presenting the case either fell silent or excelled. It was extremely difficult to excel because the questions were often of the "guess what I am thinking" variety. The residents I worked with were all hell-bent on excelling. After admitting 10 or 20 patients they would head to the library and try to pull the latest relevant research. They may have slept only 30 minutes the night before, but they were ready to match wits with the attendings in the morning.

Part of that process was discussing the relevant literature and references. In those days there were often copies of the relevant research and, beyond that, seminars and research projects that focused on patient care. I still remember having to give seminars on gram-negative bacterial meningitis and anaphylaxis. One of my first patients had adenocarcinoma of unknown origin in his humerus, and two days later the attending wanted to know what I had read about it. I had a list of 20 references. All of that reading and research required going to a library and pulling the articles - there was no online access. But even after online access arrived, the process among attendings, residents, and medical students did not substantially change.

I was more than a little shocked to hear that process referred to as "intuition-based medicine" in a recent opinion piece in the New England Journal of Medicine (1). In this article the authors seem to suggest that there was no evidence-based medicine at all. We were all just randomly moving about and hoping to accumulate enough relevant clinical experience over the years so that we could make intuitive decisions about patient care. I have been critical of these weekly opinion pieces in the NEJM for some time, but this one seems to strike an all-time low. Not only were the decisions 35 years ago based on the available research, but there were often clinical trials being conducted on active hospital services - something that rarely happens today now that most medicine is under corporate control.

Part of the authors' premise here is that evidence-based medicine (EBM) was some kind of an advance over intuition-based medicine and that it is now clear it is not all that it is cracked up to be. That premise is clearly wrong because there was never any intuition-based medicine before what they demarcate as the EBM period. Secondly, anyone trained in medicine in the last 40 years knew what the problems with EBM were from the outset - there would never be enough clinical trials of adequate size to include the real patients that people were seeing. I didn't have to wait to read all of the negative Cochrane Collaboration studies saying this in their conclusions. I knew this because of my training, especially training in how to research problems relevant to my patients. EBM was always a buzzword that seemed to indicate some hallowed process that the average physician was ignorant of. That is only true if you completely devalue the training of physicians before the glory days of EBM.

The authors suggest that interpersonal medicine is what is now needed. In other words, the relationship between the physician and patient (and caregivers) and their social context is relevant - specifically, the influence the physician has on these people. Interpersonal medicine "requires recognition and codification of the skills that enable clinicians to effect change in their patients, and tools for realizing those skills systematically." They see it as the next phase in "expanding the knowledge base in patient care", extending EBM rather than rejecting it. The focus will be on the social and behavioral aspects of care rather than just the biophysical. The obvious connection to biopsychosocial models will not be lost on psychiatrists. That is straight out of both interpersonal psychotherapy (Sullivan, Klerman, Weissman, Rounsaville, Chevron) and the biopsychosocial model itself by Engel. Are the authors really suggesting that this was also not a focus in the past?

Every history and physical form or dictation that I ever had to complete contained a family history section and a social history section. That was true whether the patient was a medical-surgical patient or a psychiatric patient. Suggesting that the interpersonal, social, and behavioral aspects of patient care have been omitted is revisionism as serious as the idea that intuition-based medicine existed before EBM.

I don't understand why the authors just can't face the facts and acknowledge the serious problems with EBM and the reasons why it has not lived up to the hype. There needs to be a physician there to figure out what the evidence means for the individual patient and to be an active intermediary protecting the patient against the shortfalls of both the treatment and the data. As far as interpersonal medicine goes, that has been around as long as I have been practicing as well. Patients do better with a primary care physician - a physician who knows them and cares for them over time. They are more likely to take that physician's advice. Contrary to managed care propaganda (from about the same era as EBM), current health care systems fragment care, make it unaffordable, and waste a huge amount of physician time, taking them away from relationships with patients.

Their solution is that physicians can be taught to communicate with patients and then be measured on patient outcomes. This is basically a managed care process applied to less tangible outcomes than whether a particular medication is started. In other words, it is soft data that is easier to blame physicians for. In this section they mention that one of the authors works for Press Ganey - a company that markets communication modules to health care providers. I was actually the recipient of such a module that was intended to teach me how to introduce myself to patients. The last time I had taken that kind of course was an introductory course on patient interviewing in 1978. I would not have passed the oral boards in psychiatry in 1988 if I did not know how to introduce myself to a patient. And yet here I was in the 21st century, taking a mandatory course on how to introduce myself after I have done it tens of thousands of times. I guess I have passed the first step toward the new world of interpersonal medicine. I have boldly stepped beyond evidence-based medicine.

I hope there is a lot of eye rolling and gasping going on as physicians read this opinion piece.  But I am also concerned that there is not. Do younger generations of physicians just accept this fiction as fact?  Do they really think that senior physicians are that clueless?  Are they all accepting a corporate model where what you learn in medical school is meaningless compared to a watered down corporate approach that contains a tiny fraction of what you know about the subject?

It is probably easier to accept all of this revisionist history if you never had to sit across from a dead serious attending at 7AM, present ten cases and the associated literature and then get quizzed on all of that during the next three hours of rounding on patients.
 



George Dawson, MD, DFAPA


References:

1: Chang S, Lee TH. Beyond Evidence-Based Medicine. N Engl J Med. 2018 Nov 22;379(21):1983-1985. doi: 10.1056/NEJMp1806984. PubMed PMID: 30462934.


Graphic Credit:

That is the ghost of Milwaukee County General Hospital, one of the teaching affiliates of the Medical College of Wisconsin. It was apparently renamed Doyne Hospital long after I attended medical school there. It was demolished in 2001. I shot this with 35mm Ektachrome while walking to medical school one day. The medical school was on the other side of this massive hospital.






Sunday, February 7, 2016

Anecdotes Are Not Evidence And Other "Evidence-Based" Fairy Tales



A lot depends on the kind of anecdote that you are talking about. When we talk about anecdotes in medicine that generally means a case or a series of cases. In the era of "evidence-based medicine" this is considered to be weak or non-existent evidence. Even 30 years ago when I was rounding with a group of medicine or surgery residents, "That's anecdotal..." was a popular piece of roundsmanship used to put down anyone basing their opinion on a case or series of cases. At the time, series of cases were still acceptable for publication in many mainstream medical journals. Since then there has been an inexorable march toward evidence-based medicine, and that evidence is invariably clinical trials or meta-analyses of clinical trials. In some cases the meta-analyses can be interpreted in a number of ways, including interpretations that are opposed to the authors' own. That is easy to do when a biologically heterogeneous condition is being studied.

There has been some literature supporting the anecdotal, but it is fairly thin. Aronson and Hauben considered the issue of drug safety. They made the point that isolated adverse drug reactions are rarely studied at a larger epidemiological level following the initial observation. They argued that criteria could be established for a definitive adverse event based on a single observation. In the case of an adverse drug event they said the following criteria could be considered "definitive": extracellular or intracellular deposition of the drug, a specific anatomic location or pattern of injury, physiological dysfunction or direct tissue damage, or infection as a result of an infective agent due to contamination (1). They provide specific examples using drugs and conclude: "anecdotes can, under the right circumstances, be of high quality and can serve as powerful evidence.  In some cases other evidence may be more useful than a randomized clinical trial.  And combining randomized trials with observational studies, or with observational studies and case series, can sometimes yield information that is not in clinical trials alone."  This is essentially the basis for post-marketing surveillance by the Food and Drug Administration (FDA). In that case, monitoring adverse events on a population-wide basis increases the likelihood of finding rare but serious adverse events compared with the original trials.

Enkin and Jadad wrote a paper that also considered anecdotal evidence as opposed to formal research (2). They briefly review the value of informal observation and the associated heuristics, but also point out that these same observations and heuristics can lead some people to adhere to irrational beliefs. The value of formal research is that it acts as a check on this process when patterns emerge at a larger scale. Several experts have applied Bayesian analysis to single-case results to illustrate how pre-existing data can be applied to single-case analyses with a high degree of success. Pauker and Kopelman (3) looked at a case of hypertension and when to consider a search for secondary causes - in this case pheochromocytoma. They made the interesting observation that:

"Because the probability of pheochromocytoma in a patient with poorly controlled hypertension who
does not have weight loss, paroxysms of sweating, or palpitations is not precisely known, the clinician often makes an intuitive estimate.  But the heuristics (rules of thumb) we use to judge the likelihood of diseases such as pheochromocytoma may well produce a substantial overestimate, because of the salient features of the tumor, its therapeutic importance, and the intellectual attraction of making the diagnosis."   

They take the reader through a complex presentation of hypertension and likelihood ratios used to analyze it and conclude:

"Hoofbeats usually signal the presence of horses, but the judicious application of Bayes' rule can help prevent clinicians from being trampled by a stampeding herd that occasionally includes a zebra."

In other words, by using Bayes' rule you can avoid subjecting patients with common conditions to excessive (and risky) testing aimed at not missing an uncommon condition, and you still won't miss the uncommon condition. Looking at the data that support or refute that condition will make the answer clear, provided you have an idea of the larger probabilities.
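
To make that arithmetic concrete, here is a minimal sketch of the odds form of Bayes' rule - post-test odds equal pre-test odds times the likelihood ratio. The specific numbers are my own hypothetical illustration, not figures from the Pauker and Kopelman paper.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability using the
    odds form of Bayes' rule: post-test odds = pre-test odds x likelihood ratio."""
    pre_test_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

# Hypothetical numbers for illustration only: assume pheochromocytoma is
# present in about 0.2% of patients with poorly controlled hypertension and
# that a positive screening finding carries a likelihood ratio of 10.
# The post-test probability is still under 2% - the stampeding herd is
# almost all horses.
print(post_test_probability(0.002, 10))   # ~0.02
```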

How does all of this apply to psychiatry?  Consider a few vignettes:

Case 1:  A psychiatric consultant is asked to assess a patient by a medicine team.  The patient is a 42 year old man who just underwent cardiac angiography.  The angiogram was negative for coronary artery disease and no intervention was necessary.  Shortly afterwards the patient becomes acutely agitated and the intern notices that bipolar disorder is listed in the electronic health record.  He calls the consultant and suggests an urgent consult and possible transfer to psychiatry for treatment of an acute manic episode.  The consultant arrives to find a man sitting up in bed, appearing very angry and tearful and shaking his head from side to side.

Case 2:  A 62 year old woman is seen in an outpatient clinic for treatment-resistant depression.  She has been depressed for 20 years and had had a significant number of medication changes before being referred to a psychiatric clinic.  All of the medications were prescribed by her primary care physician.  She gives the history that she gets an annual physical exam and that the exams have all been negative.  Except for fatigue and some lightheadedness, her review of systems is negative.  She is taking lisinopril for hypertension but has no history of cardiac disease.  She has had electrocardiograms (ECGs) in the past but no other cardiac testing.  The psychiatrist discusses the possibility of a tricyclic antidepressant.

Case 3:  An inpatient psychiatrist has just admitted a 46 year old woman who is concerned about the FBI.  She had been working and functioning at a high level until about a month ago, when she started to notice red automobiles coming past her cul-de-sac at a regular frequency.  She remembered getting into an argument at work about American military interventions in the Middle East.  She made it very clear that she did not support this policy.  She started to make a connection between the red automobiles and the argument at work.  She concluded that an employee must have "turned her in" to the federal authorities.  These vehicles were probably the FBI or Homeland Security.  She noticed clicking noises on her iPhone and knew it had been cloned.  She stopped going into work and sat in the dark with the lights out.  She reasoned it would be easier to spot somebody breaking into her home by the trail of light they would leave.  There is absolutely no evidence of a mood disorder, substance use disorder, or family history of psychiatric problems.  She agrees to testing, and all brain imaging and laboratory studies are normal.  She agrees to a trial of low-dose antipsychotic medication but does not tolerate 3 different medications at low doses.

These are just a few examples of the types of clinical problems that psychiatrists encounter on a daily basis that require some level of analysis and intervention.  A psychiatrist encounters 20 or 30 of these scenarios a day and over the course of a career ends up making tens of thousands of these decisions.  What is the "evidence basis" of these decisions?  There really is nothing beyond anecdotes, with varying strengths of confirmation available.  What kinds of evidence would the evidence-based crowd like to see?  In Case 1, a study that looked at behavioral disturbances after cardiac catheterization would be necessary, although Bayes would suggest the likelihood of that occurring is very low.  In Case 2, a large trial of treatment approaches to 62 year old women with depression and fatigue would be useful.  I suppose another trial of what kinds of laboratory testing might be necessary would also help, although much of the literature suggests that fatigue is very nonspecific.  In most cases where patients are being seen in primary care, fatigue and depression are indistinguishable.  Extensive testing, even for newer inflammatory markers, yields very little.  Further on the negative side, evidence-based authorities suggest that a routine physical examination and screening tests really add nothing to disease prevention or long-term well-being.  Case 3 is more interesting.  Here we have a case of paranoid psychosis that cannot be treated because the patient experiences intolerable side effects from the usual medications.  Every practicing psychiatrist knows that a significant number of people can't take entire classes of medications.  Here we clearly need a clinical trial of 46 year old women with paranoid psychoses who could not tolerate low-dose antipsychotic medication.

By now my point should be obvious.  None of the suggested trials exist and they never will.  It is very difficult to match specific patients and their problems to clinical trials.  Some of the clinical occurrences are so rare (agitation after angiography, for example) that it is very doubtful that enough subjects could be recruited and enrolled in a timely manner.  And there is the expense.  There are very few sources that would fund such a study and, these days, very few practice environments or clinical researchers that would be interested in the work.  Practice environments are now practically all managed care environments where physician employees spend much of their time on administrative work, the company views clinical data as proprietary, and research is frequently focused on advertising and marketing rather than answering useful clinical questions.

That brings us to the larger story of what the "evidence" really is.  The anecdotes that everyone seems to complain about are really the foundation of clinical methods.  Training in medicine is required to experience these anecdotes as patterns to be recognized and classified for future work.  They are much more than defective heuristics.  How does that work?  Consider Case 1.  The psychiatric consultant in this case sees an agitated and tearful man who appears to be in distress.  The medicine team sees a diagnosis of bipolar disorder and concludes the patient is having an acute episode of mood disturbance.  The consultant quickly determines that the changes are acute and rejects the medical team's hypothesis that this is acute mania.  After about 5 questions he realizes that the patient is unable to answer, pulls a pen out of his pocket, and asks the patient to name the pen.  When the patient is not able to do this, he performs a neurological exam and determines the patient has right arm weakness and hyperreflexia.  An MRI scan confirms the area of an embolic stroke and the patient is transferred to neurology rather than psychiatry.  The entire diagnostic process is based on the past anecdotal experience of diagnosing and treating neurological patients as a medical student, as an intern, and throughout his career.  Not to labor the point (too much) - it is not based on a clinical trial or a Cochrane review.

The idea that the practice of medicine comes down to a collection of clinical trials, broken down according to a proprietary boilerplate that generally concludes the quality of most studies is low and therefore there is not much to draw conclusions from, is absurd.  Trusting meta-analyses for answers is equally problematic.  You might hope for some prior probability estimates for Bayesian analysis.  But you will find very little in the endless debate (4) about whether antidepressants work or whether they are dangerous.  You will find nothing when these studies are in the hands of journalists who are not schooled in how to practice medicine and know nothing about treating biologically heterogeneous populations and unique individuals.  No matter how many times the evidence-based crowd tells you that you are treating the herd, a physician knows that treating the herd happens one person at a time.  They also know that screening the herd can lead to more problems than solutions.

Treating the herd would not allow you to make a diagnosis of complete heart block with immediate referral to Cardiology for pacemaker placement (Case 2), or to provide psychotherapy with no medications at all and see an eventual recovery (Case 3).  If you accept the results of many clinical trials or their interpretation by journalists, you might conclude that your chances of recovery from a complex disorder are no better than chance.  Nothing could be further from the truth.

That is why most of us practice medicine.



George Dawson, MD, DLFAPA


References:

1:   Aronson JK, Hauben M. Anecdotes that provide definitive evidence. BMJ. 2006 Dec 16;333(7581):1267-9. Review. PubMed PMID: 17170419; PubMed Central PMCID: PMC1702478.

2:  Enkin MW, Jadad AR. Using anecdotal information in evidence-based health care:  heresy or necessity? Ann Oncol. 1998 Sep;9(9):963-6. Review. PubMed PMID: 9818068.

3:  Pauker SG, Kopelman RI. Interpreting hoofbeats: can Bayes help clear the haze? N Engl J Med. 1992 Oct 1;327(14):1009-13. PubMed PMID: 1298225.

4:  Alexander S. SSRIs: Much More Than You Wanted To Know. Slate Star Codex. July 7, 2014.



Attribution:

Photo at the top is Copyright Gudkov Andrey and downloaded from Shutterstock per their Standard License Agreement on 2/3/2016.

Monday, November 18, 2013

Evidence-Based Maintenance of Certification – A Reply to ABMS

The politics of regulating physicians is no different from politics in general, and that typically has nothing to do with scientific evidence.  From the outset it was apparent that some people had the idea that general standardized exams with high pass rates and patient-report exercises would somehow keep all of the specialists in a particular field up to speed.  That assumes they were not up to speed in the first place.

Being a member of a professional organization embroiled in this controversy has given me a front-row seat to the problems with physician regulation and how things are never quite what they seem to be.  From the outset there was scant evidence that recertification exams were necessary, and now that the exams exist there is no evidence that I am aware of that they have accomplished anything.  The American Board of Medical Specialties (ABMS) actually has a page on their web site devoted to what evidence exists, and I encourage anyone to go there and find any scientific evidence that supports current MOC, much less the approaching freight train of Maintenance of Licensure or linking MOC to annual relicensing by state medical boards.  Feel free to add that evidence to the comments section for this post.

Prior to this, several specialty organizations had their own programs consisting of self-study courses on specific topics relevant to the specialty that could be completed every year.  A formal proctored examination, and all of the examination fees that involves, was not necessary.  The course topics were developed by consensus of the specialists in the field.  A couple of years ago I watched a CME course presentation by a member of the ABMS who pointed out that three specialty boards (of a total of 24) wanted to continue to use this method for relicensing and recertification.  They were denied the ability to do that because the ABMS has a rule that all of the boards have to use the same procedure that the majority votes on.  The problem was that very few of the physicians regulated by these boards were aware of the options, or even of the fact that the ABMS would move to a complicated recertification scheme and eventually push for it to become part of relicensing in many states.

If the ABMS is really interested in evidence-based practice, the options to me are very clear.  They currently have no proof that their recertification process is much more than a public relations initiative.  Here is my proposal.  Do an experiment where one half of the specialists to be examined that year complete a self-study course in the relevant topics for that year.  That can be designated the experimental group.  The other half of the specialists receive no intervention other than studying on their own whatever they think might be relevant.  Test them all on the topics selected for the self-study group and then compare their test scores.  See who does better on the test.  Secondary endpoints could be developed to review the practices of each group and determine whether there are any substantial differences on secondary measures that are thought to be relevant in the tested areas.
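
If anyone actually wanted to run that experiment, sizing it is straightforward.  The sketch below is my own back-of-the-envelope arithmetic using the standard normal-approximation formula for comparing two group means; the effect sizes are hypothetical and nothing here comes from the ABMS.

```python
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for comparing mean exam scores between
    the self-study arm and the no-intervention arm:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d is the standardized
    difference in mean scores (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return round(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Hypothetical effect sizes: even detecting a small difference (d = 0.2)
# requires only about 400 specialists per arm - well within reach of boards
# that examine thousands of diplomates every year.
for d in (0.2, 0.3, 0.5):
    print(d, n_per_group(d))   # ~392, ~174, ~63 per arm
```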

Until this straightforward experiment is done, the current plan and policies of the ABMS are all speculative and appear to be based upon what has been called conventional wisdom.  Conventional wisdom appears to be right because all of the contrary evidence is ignored.  There is no scientific basis for conventional wisdom, and it falls apart under scrutiny.  Physicians in America are currently the most overregulated workers in the world.  The rationale for these regulations is frequently based on needing to weed out the few who are incompetent, unethical, or physically or mentally unable to practice medicine.  Many regulatory authorities grapple with that task while maintaining public safety.  In many cases it is a delicate balance.  But we are far past the point where every physician in the country should be overregulated and overtaxed based on conventional wisdom because regulatory bodies are uncomfortable about their ability to identify or discipline the few.  If the ABMS or any other medical authority wants evidence-based safeguards for the public based on examination performance – it is time to run the experiment and stop running a public relations campaign to support the speculative ideas of a few.

George Dawson, MD, DFAPA    

Thursday, October 3, 2013

Psychotherapy Has No Image Problem - Psychotherapy Has a Managed Care Problem

There was an opinion piece in the New York Times a few days ago entitled "Psychotherapy's Image Problem".  The author goes on to suggest that despite empirical evidence of effectiveness and a recent study showing a patient preference for psychotherapy, it appears to be in decline.  He jumps to the conclusion that this is due to an image problem - namely that primary care physicians, insurers, and therapists are unaware of the empirical data.  That leads to a lack of referrals and, for some therapists, the use of therapies that are not evidence-based - further degrading the field.  He implicates Big Pharma in promoting the image of medications and argues that the evidence base for medication has been marketed better.  He implicates the American Psychiatric Association in promoting medications and suggests that its guidelines are biased against psychotherapies.

I am surprised how much discussion this post has received, as though the contention of the author is accurate.  Psychotherapy has no image problem, as evidenced by one of the references he cites showing that most patients prefer it.  It wasn't that long ago that the famous psychotherapy journal Consumer Reports surveyed people and concluded that not only were psychotherapy services preferred, they were found to be tremendously helpful by the majority of people who used them.  That study was not scientifically rigorous but it certainly was effective from a public relations standpoint.

The idea that psychiatry is promoting drugs over psychotherapy seems erroneous to me.  The APA guidelines certainly suggest psychotherapies as first-line treatments and as part of selecting a therapeutic approach to the patient's problems.  Psychopharmacology is also covered, in many cases with significant qualifications.  Further, there are a number of psychiatrists who lecture around the country and are strong advocates for what are primarily psychotherapeutic approaches to significant disorders like borderline personality disorder and obsessive compulsive disorder.  Psychiatrists have also been leaders in the field of psychotherapy of severe psychiatric disorders and have been actively involved in that field for decades.  Even psychopharmacology seminars include decision points for psychotherapy, either as an alternate modality to pharmacological approaches or as a complementary one.  What is omitted from the arguments against psychiatry is that many payers do not reimburse psychiatrists for doing psychotherapy.

The author's action plan to politically promote the idea that psychotherapy is evidence-based and deserves more utilization is doomed to fail because the premises of his argument are inaccurate.  There is no image problem created by psychiatry - if anything the image is enhanced.  There is definitely a lack of knowledge about psychotherapy among primary care physicians, and it is likely that is a permanent deficit.  Primary care physicians don't have the time, energy, or inclination to learn about psychotherapy.  In many cases they have therapists in their clinic and just refer any potential mental health problems to those therapists.  In other cases, the health plan that primary care physicians work for has an algorithm that tells them to give the patient a 2 minute depression rating scale and prescribe an antidepressant or an anxiolytic.

And that is the real problem here.  Psychotherapists, just like psychiatrists, are completely marginalized by managed care and business tactics.  If you are a managed care company, why worry about insisting that therapists send you detailed treatment plans and notes every 5 visits for a maximum of 20 visits per year when you can just eliminate them and suggest that you are providing high quality services for depression and anxiety by following rating scale scores and having your primary care physicians prescribe antidepressants?  The primary care physicians don't even have to worry if the diagnosis is accurate anymore.  The PHQ-9 score IS the diagnosis.  Managed care tactics have decimated psychiatric services and psychotherapy for the last 20 years.

It has nothing to do with the image of psychotherapy.  It has to do with big business and their friends in government rolling over professionals and claiming that they know more than those professionals.  If you really want something evidence-based - they can make up a lot of it, like the equation:

rating scale + antidepressants = quality

If I am right about the real cause of the decreased provision of psychotherapy, the best political strategy is to expose managed care and remember that current politicians and at least one federal agency are strong supporters of managed care.

George Dawson, MD, DFAPA

Gaudiano BA.  Psychotherapy's Image Problem.  New York Times.  September 29, 2013.

Friday, April 20, 2012

The $40 Call


One of the local HMOs has been heavily advertising its nurse practitioner diagnostic line.  It caught my attention because the radio ad was focused on wood tick season, and it suggested that the diagnosis and treatment of Lyme disease could be rapidly made over the phone and that it might require e-mailing in a picture of the rash or tick.

I used to teach a course in medical diagnostics and diagnostic reasoning, and one of the examples I used in that course involved expert diagnosis of rashes from photographs.  An important part of medical diagnostics is pattern recognition.  There is probably no better example than the diagnosis of rashes, and it should not come as a surprise that experts in rashes - dermatologists - do much better than physicians who are not experts.  That is true both in terms of making the actual diagnosis and in the total amount of time that it takes to arrive at that diagnosis.

When I heard about this new service to diagnose Lyme disease based on photographs, I went to Medline to see if I could find anything written about it.  Managed care organizations and HMOs frequently advertise the fact that they are evidence-based organizations.  I really could not find any studies on using the Internet or telephone consultation for the diagnosis of rashes or Lyme disease.

I think that this new service has implications for how business models are affecting the practice of medicine.  With all the talk about transparency, it would be useful for the public to know the false-positive and false-negative rates for this diagnostic service.  That would certainly be consistent with the literature on the misdiagnosis of Lyme disease.
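
To illustrate why those rates matter, here is a minimal sketch of how false-positive and false-negative rates translate into the chance that a positive phone or photo diagnosis is actually correct.  Every number is a hypothetical assumption on my part - the HMO has published nothing about the accuracy of this service.

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive diagnosis is a true positive, given assumed
    test characteristics and the prevalence of disease among callers."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Hypothetical: a photo-based diagnosis that is 80% sensitive and 90% specific,
# applied to callers of whom 5% actually have Lyme disease.  About 70% of the
# positive "diagnoses" would be false positives.
print(positive_predictive_value(0.80, 0.90, 0.05))   # ~0.30
```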

From a purely economic perspective, it is interesting that the cash charge for this service is on par with the most common cash charge for seeing a psychiatrist in person. As I have previously posted, there is a wide range for the psychiatric charge and it is conceivable that this telephone service generates considerably more cash than a psychiatrist does sitting in a clinic, seeing patients, and doing all of the associated administrative work.

The next logical step for this telephone service is to have patients complete a number of rating scales and be treated for depression.  Whether it is Lyme disease or depression, the diagnosticians with the greatest pattern-matching and pattern-completion capabilities are taken out of the loop.

George Dawson, MD, DFAPA