Saturday, May 16, 2026

What Does ERISA Say About AI Guardrails?

A colleague sent me a news article this morning about a couple suing a major AI firm over advice its chatbot gave their son that resulted in a fatal overdose. As a psychiatrist, most of what I read about problematic AI comes in the form of AI hallucinating false medical references (1), AI-induced psychosis in people who either use it excessively or are predisposed, or AI facilitating its own use through excessive praise or obsequiousness. In the latter case it can result in emotional attachment to the AI that is, of course, unwarranted. I have also flagged a couple of cases that illustrate the problems when AI is applied to moral and political decision making.

I decided to do a little more research on the subject. I was surprised to find a Wikipedia page titled Deaths Linked To Chatbots. Thirty-three deaths are listed, not including the case I was investigating. The suggested pathways to these deaths generally include overuse, emotional attachment, and bad advice biased toward reinforcing irrational decisions. The evidence on this page highlights a couple of concepts that might not be apparent to most people, including the architects of AI. The first is the importance of emotion in human decision making. Bechara demonstrated this years ago by showing that if there is a disruption between the emotional and cognitive systems in the human brain, even basic decisions become impossible. Other disruptions in the same systems can lead to an array of emotional dysregulation and the associated irrational and often socially inappropriate decisions. Emotional biases also clearly affect decision making in intact brains. There is perhaps no better example than the current American political system installing a less competent government that is clearly not acting in support of the wants and needs of most Americans.

The second concept is that humans can form intense attachments to inanimate objects that are unable to reciprocate. The classic example is the developmentally normal transitional object (stuffed animals, toys, blankets). Winnicott theorized that in infancy this object is recognized as being neither part of the self nor of external reality. It is a fantasized relationship that represents a future “illusion” (2). According to Winnicott’s theory, the transitional object loses meaning during normal development and becomes irrelevant. Persistence into later stages may indicate anything from a normative transition, like object attachment during grieving, to a way of compensating for a lack of interpersonal attachments, to personality pathology or other psychopathology (3).

Chatbots can be significant attachment figures, and this is currently an area of study (4-6). Human–digital object transference is also being explored (7), as are the projection of human needs onto a digital object (8) and more complex models of human-machine connectedness (9). This literature is referenced primarily to indicate that there is a lot that is not known about the array of human responses to interactions with these machines and what the possibilities are.

Apart from my previous concerns that machines lack consciousness and have demonstrated inadequate moral decision-making, there is always the question of programming and algorithms. Both of these features are the bane of most Internet users, who find that their most mundane interests are amplified into a barrage of advertisements and sales offers. And then there is the army of misinformation bots spreading foreign and domestic political propaganda 24 hours a day. None of that requires AI, but is there any doubt that AI will make it worse and harder to detect?

It is no secret that the current AI explosion is a multitrillion-dollar enterprise run by a handful of men who have shown no interest in the environment, social equity, or human rights. They immediately aligned themselves with an autocratic government at the highest levels and, so far, have faced no regulation of their AI. As a result, that AI is spewing out massive amounts of information that the average citizen takes as legitimate, if not as some type of advanced advice. The complications include the deaths noted above, environmental damage from the required power generation, and societal damage from unemployment. There is additional damage from the inequity of wealth concentration. The barrage of pro-AI hype in the media greatly exceeds any realistic discussion of the downsides. The only clear benefit most people see is the ability to sit at home and entertain themselves with a chatbot, or to see if an AI can do their homework or other projects. The purported efficiency seems offset by a tremendous amount of wasted time.

At a minimum, the case that started this post shows a stark contrast between human decision makers and AI. In 40 years of practice, I never recommended kratom, either by itself or with alprazolam (Xanax) or diphenhydramine (Benadryl). In fact, I spent a considerable amount of time getting people off alprazolam and, later, kratom. But I am not unique in this; I don’t know of any physician who would make these recommendations. Yet those recommendations form the basis for the AI lawsuit.

That highlights the danger of the current hype that AI will replace physicians, and of the predictable studies comparing AI to physicians that conclude AI can be safely consulted. There are even stories of AI prescribing drugs in some settings without physician input. The question of agency is never addressed, and that seems to be the basis for this lawsuit. Corporations always seem to do a good job of avoiding responsibility in healthcare. The classic example is the Employee Retirement Income Security Act of 1974 (ERISA). The pre-emption clause of ERISA means that employees covered by employer-sponsored health plans cannot bring state malpractice or negligence claims against their managed care organization (MCO) for injuries from denial of plan benefits, utilization review decisions, failure to use qualified physicians, or improper plan administration. The reviewing physicians working for MCOs are also generally protected, the associated arguments being that utilization review is not the practice of medicine and/or that the reviewers have no accountability or duty to the patient. Several studies have documented the patient harms related to this accountability gap, and despite several attempts at amelioration it remains largely intact and a considerable source of financial success for managed care organizations.

The critical question is whether this kind of accountability gap will exist with AI. It is easy to envision a scenario where AI is implemented to review charts and prescribe low-risk medications, as many online services do now. Will AI eventually take the place of physician reviewers employed by MCOs? Will consumers and patients be led to believe that AI is making decisions affecting their medical care based on the best available information when it is actually acting in the interest of the corporation? Current statistics suggest that tens of millions of these decisions are made every year. AI could greatly increase that number, as well as the harassment factor when decisions are appealed.

With all of the political talk about guardrails for AI, it is important to recognize that these guardrails need to exist at several levels. Right now, it is not much of a stretch to say that AI is out there practicing medicine without a license. In the majority of cases like the initial example, the user does not know if the result comes strictly from the medical literature or from something else. The user does not know if the AI is exercising the judgment of an average physician or, in malpractice parlance, using the community standard of care. The user does not know if their own psychology, in terms of defense mechanisms or attachment style toward inanimate objects or AI, is being exploited. The user does not know if the AI is just telling them what they want to hear. And the user does not know if the AI is providing information in their best interest or in the interest of corporations or the government.

While doing research for this post, I read a study in which subjects were asked to rate the professionalism of the AI. In my opinion the single most significant determinant of professionalism for physicians is accountability and duty to their patients. It fuels not only the immediate encounter but also the concept of lifelong learning and service to patients. It is usually evident only indirectly, in the form of positive results and a positive relationship over time. AI in its current form does not have it, and I am not convinced that a society or culture that came up with ERISA can construct physician-like guardrails around medical AI.

George Dawson, MD, DFAPA

1:  Topaz M, Roguin N, Gupta P, Zhang Z, Peltonen LM. Fabricated citations: an audit across 2·5 million biomedical papers. Lancet. 2026 May 9;407(10541):1779-1781. doi: 10.1016/S0140-6736(26)00603-3. PMID: 42107362.

2:  Kernberg OF.  Object relations theories and techniques.  In:  Textbook of Psychoanalysis, 2nd ed.  Person ES, Cooper AM, Gabbard GO, eds.   Washington DC: American Psychiatric Association Publishing, 2025: 57-75. 

3:  Bachar E, Canetti L, Galilee-Weisstub E, Kaplan-DeNour A, Shalev AY. Childhood vs. adolescence transitional object attachment, and its relation to mental health and parental bonding. Child Psychiatry Hum Dev. 1998 Spring;28(3):149-67. doi: 10.1023/a:1022881726177. PMID: 9540239.   

4:  Cheng N, Yu R. Measuring and understanding emotional attachment in human-AI relationships. Ergonomics. 2026 Feb 2:1-20. doi: 10.1080/00140139.2026.2622539. Epub ahead of print. PMID: 41622967. 

5:  Liu T, Lo TY, Wen KH, Sun Y, Wei ZQ. Pathways of long-term AI virtual companion app use on users' attachment emotions: a case study of Chinese users. Front Psychol. 2026 Jan 12;16:1687686. doi: 10.3389/fpsyg.2025.1687686. PMID: 41602682; PMCID: PMC12833267.

6:  Koles B, Nagy P. Digital object attachment. Curr Opin Psychol. 2021 Jun;39:60-65. doi: 10.1016/j.copsyc.2020.07.017. Epub 2020 Jul 22. PMID: 32823244.

7:  Holohan M, Fiske A. "Like I'm Talking to a Real Person": Exploring the Meaning of Transference for the Use and Design of AI-Based Applications in Psychotherapy. Front Psychol. 2021 Sep 27;12:720476. doi: 10.3389/fpsyg.2021.720476. PMID: 34646209; PMCID: PMC8502869.

8:  Saracini C, Cornejo-Plaza MI, Cippitani R. Techno-emotional projection in human-GenAI relationships: a psychological and ethical conceptual perspective. Front Psychol. 2025 Sep 29;16:1662206. doi: 10.3389/fpsyg.2025.1662206. PMID: 41089650; PMCID: PMC12515930.

9:  Boyd RL, Markowitz DM. Artificial Intelligence and the Psychology of Human Connection. Perspect Psychol Sci. 2026 Mar;21(2):192-220. doi: 10.1177/17456916251404394. Epub 2026 Jan 29. PMID: 41608879; PMCID: PMC12960742.

