
Saturday, January 10, 2026

The Problems With AI Are More Readily Apparent

 



Note:  This essay is written by an old human brain that was writing essays and poetry decades before there was an Internet. No AI was used to create this essay.

Artificial Intelligence (AI) hype permeates every aspect of modern life.  We see daily predictions of what group of workers will be replaced and how AI is going to cure every human disease. It is no accident that the main promoters of AI will make significant profits from it.  Vast amounts of money are being invested and gambled on AI on Wall Street.  Educators are concerned that students are using it to write the essays that took us hours or days to write in college – in just a few minutes.  That application leads to the obvious questions about what will be the end product of college if all the serious, critical, and creative thought has been relegated to a machine. 

Apart from sheer data scraping and synthesis of what amounts to search inquiries, AI seems to be imbued with magical qualities that probably do not exist.  It is like the science fiction of the 20th century – alien beings superior to humans in every way because they lack that well-known weakness – emotion.  If only we had a purely rational process, life would be much better.  The current promoters tend to describe this collective AI as making life better for all of us and minimize any risks.  The suggested risks also come from the sci-fi genre in the form of Terminator-type movies where the machines decide it is in their best interest to eliminate humans and run the planet on their own. There are the usual failed programs to preserve human life at all costs or to destroy humans only if they are carrying weapons.

But AI thought experiments do not require even that level of complexity to create massive problems.  Consider Bostrom’s well-known example of a paperclip-making machine run by AI (1).  In that example PaperClip AI is charged with the task of maximizing paperclip production.  In a case of infrastructure profusion it “proceeds by converting the Earth and increasingly large chunks of the observable universe into paperclips.”   He gives several reasons why obvious fixes like setting a production limit or a production interval would probably not work and would still lead to catastrophic infrastructure profusion.  The current limitation on this kind of AI is that it does not have control over acquiring all these resources.  It also lacks the ability to perceive how to correct production in a fully autonomous mode.   Bostrom also adds characteristics to the AI – like motivation and reinforcement – that seem to go beyond the usual conceptualizations.  Where would they come from?  If we are not thinking about programmed algorithms, what kind of intelligence has its own built-in reinforcement and motivation schedule independent of the environment?  After all – the task of producing just enough paperclips without consuming all of the resources on the planet is easy enough for a human manager to accomplish.  Bostrom suggests that it is an intuitive task for humans but not so much for machines.
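The thought experiment can be caricatured in a few lines of code.  This is my own toy sketch, not anything from Bostrom's book: an optimizer told only to "maximize paperclips" converts every resource it can reach, while one with a stopping rule (the human manager's intuition) halts at a target.

```python
# Toy rendering of the paperclip thought experiment (illustrative sketch only).
# The naive agent's objective has no notion of "enough"; the capped agent stops.

def run_optimizer(resources, target=None):
    """Return (paperclips_made, resources_left)."""
    clips = 0
    while resources > 0:
        if target is not None and clips >= target:
            break           # capped agent stops once the goal is met
        resources -= 1      # naive agent converts everything it can reach
        clips += 1
    return clips, resources

# The unbounded agent consumes the entire resource pool:
print(run_optimizer(1_000_000))              # (1000000, 0)
# The capped agent leaves almost everything intact:
print(run_optimizer(1_000_000, target=100))  # (100, 999900)
```

The cap works here only because this toy agent has no instrumental drives; Bostrom's argument is precisely that a superintelligent agent would still build infrastructure to be certain the count is correct, which is why the "obvious fixes" fail.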

A couple of events came to my attention in the past week that make the limitations of current AI even more obvious – especially contrasted with the hype.  The first is the case of a high-profile celebrity who has lodged a complaint against the X (formerly Twitter) AI called Grok.  In it, she points out that the AI has been generating nude or semi-nude images of her from her adult and teenage photos. I heard an interview where she mentions that this practice is widespread and that other women have contacted her about the same problem.  This practice directly contradicts the X site use policy, which says that users doing this will be banned and referred for prosecution. She has not been successful in getting the photos stopped and removed.

The second event was a Bill Gates clip where he points out that what he considers a sensitive measure of progress – mortality in children less than 5 years of age – has taken a turn for the worse.  He predicts the world descending into a Dark Age if we are not able to reverse this change. That news comes in the context of Gates predicting that AI will replace physicians, teachers, and most humans in the workplace in the next 10 years.  Of course he was promoting a book at the time.  In that same clip he was optimistic about the effects of AI on health and the climate despite the massive toll that AI takes on power-generating resources – to the point that some companies are building their own municipal-sized power plants.

What are the obvious disconnects in these cases?  In the first, AI clearly has no inherent moral decision making at this point.  That function is still relegated to humans, and given what is being described here that is far from perfect.  In this case the complainant has some knowledge of the social media industry and said that she thought any engineer could correct this problem quickly.  I am not a computer engineer, so I am speculating that the fix would take a restrictive filter or an algorithmic change. But what about the true deficit here?  It could easily be seen as a basic deficit in empathy and an inability to apply moral judgment and its determinants to what are basic human questions.  Should anyone be displaying nude photos of you without your consent?  Should identified nude photos of children ever be displayed?  AI in its current iteration on X is clearly not able to answer these questions in an acceptable way and act accordingly.

The second contrast is only slightly more subtle. Conflict of interest is obvious, but Gates seems not to recognize that his described descent into the Dark Ages – based on an increasing death rate in children 5 years of age and younger – depends almost entirely on human decision making.   It runs counter to the decades of human medical decision making that he suggests will be replaced. Basic inexpensive life-saving medical care has been eliminated by the Trump administration.  This has led to predictions that hundreds of thousands if not millions of people will die as a direct result. Is AI going to replace politicians?  What would be the result if it did?  Cancelling all these humanitarian programs is a more complicated decision than not publishing nude photos of non-consenting adults or any children.  It is a marginally rational ideological decision.  Is the AI of different politicians going to reflect their marginally rational ideology, or are we supposed to trust this political decision to a machine with unknown biases or ideologies?  How will that AI decision making be optimized for moral and political decision making?  Will AI be able to shut down the longstanding human tendency to base decisions on power over morality?  If politicians allow AI to replace large numbers of workers, will it also be able to replace large numbers of politicians and managers?  It can easily be argued that the decisions of knowledge workers are more complex than those of managers.

A key human factor is empathy, and it requires emotional experience.  You get a hint of that in the best technical description of empathy I have seen, from Sims (2):

“Empathy is achieved by precise, insightful, persistent, and knowledgeable questioning until the doctor is able to give an account of the patient’s subjective experience that the patient recognizes as his own… Throughout the process, success depends upon the capacity of the doctor as a human being to experience something like the internal experience of the other person, the patient: it is not an assessment that could be carried out by a microphone and a computer.  It depends absolutely upon the shared capacity of both the doctor and patient for human experience and feeling.”  (p. 3)

The basic problem that machines have is that they are not conscious at the most basic level.  They have no experience.  In consciousness research, early thinking was that a machine would be conscious if a human communicating with it experienced it like another human being.  That was called the Turing Test, after the scientist who proposed it.  In the case of computerized chess – there was a time several years ago when the machine played as though a human being were making the moves.  The headlines asked “has the Turing Test been passed?” It turns out the test was far too easy.  There are, after all, a finite number of chess moves and plenty of data about the probabilities of each move made by top players. That can all be handled by number crunching.
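The "number crunching" point can be caricatured as a lookup: with enough games on file, choosing the move that top players played most often in a position requires no understanding at all.  The tiny opening table below is purely hypothetical illustration, not real game statistics.

```python
# Caricature of database-driven play: no chess understanding, just frequencies.
# The positions and probabilities are invented for illustration.

MOVE_FREQUENCIES = {
    "start":    {"e4": 0.45, "d4": 0.35, "c4": 0.10, "Nf3": 0.10},
    "start/e4": {"c5": 0.40, "e5": 0.30, "e6": 0.15, "c6": 0.15},
}

def book_move(position):
    """Return the statistically most popular move for a known position."""
    table = MOVE_FREQUENCIES[position]
    return max(table, key=table.get)

print(book_move("start"))      # e4
print(book_move("start/e4"))   # c5
```

A program selecting moves this way can look uncannily human to an observer, which is the sense in which the chess version of the test was far too easy.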

What happens when it comes to real human decisions that require the experience?  And by experience I mean the event with all of the integrated emotions.  Is AI likely to recognize the horror of finding your nude photos on the Internet,  or scammers trying to blackmail you over a fictional event, or the severity of your anxiety from being harassed at work, or the devastating thoughts associated with genocide or nuclear war?  Machines have no conscious experience.  Without that experience how can we expect a machine to understand why the sexual exploitation of children and adults is immoral, wrong, or even anxiety producing?

It is also naïve to think that AI will produce ideal decisions.  Today’s iteration may be the crudest form, but everyone is aware of the hallucinations. The more correct term from psychiatry is confabulation – making things up in response to a specific question.  When you consider that today’s AI is mostly a more sophisticated search engine, there really is no reason for it.  As an example, I have asked for an academic reference in a certain citation style and gotten it.  When I research that reference – I find that it does not exist.  I have had to expend considerable time finding the original journal and looking for the reference in that edition to confirm it is non-existent.  Explanations for these phenomena extend to poor data quality, poor models, bad prompts, and flawed design.  The problem is acknowledged, and many AI sites warn about the hallucinations.  A more subtle problem at this point is how AI will be manipulated by whatever business, government, or political body controls it. That problem was pointed out in a book written about a decade ago (3) illustrating how algorithms applied to individual data can reinforce human biases about race and poverty and promote inequality. I have seen no good explanation of why AI would be any different, and in fact it probably makes the financial system less secure.
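The manual verification chore described above amounts to checking each AI-supplied citation against an authoritative index before trusting it.  Here is a minimal sketch of that step; the local index is a hypothetical stand-in for a real lookup against PubMed, Crossref, or the journal's own archive.

```python
# Sketch of citation verification: a confabulated reference fails the lookup.
# TRUSTED_INDEX is a hypothetical stand-in for a real bibliographic database.

TRUSTED_INDEX = {
    ("N Engl J Med", 2025, 392, 1148),   # key for a known, verifiable paper
}

def citation_exists(journal, year, volume, first_page):
    """Return True only if the citation appears in the trusted index."""
    return (journal, year, volume, first_page) in TRUSTED_INDEX

# A genuine reference checks out; an invented one does not:
print(citation_exists("N Engl J Med", 2025, 392, 1148))   # True
print(citation_exists("J Imaginary Res", 2021, 14, 99))   # False
```

The point is that the burden of this check currently falls on the human reader – exactly the "considerable time" the essay describes.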

As I keep posting about how your brain and mind work – please keep in mind it is a very sophisticated and complex process. It is much more than looking at every available reference and synthesizing an answer.  There are the required experiential, emotional, cognitive, value-based, and moral components.  Superintelligence these days implies that at some point machines will always have the correct and best answer.  That certainly does not exist now and I have a question about whether it will in the future. It is a good time to take a more realistic view of AI and construct some guardrails.     

      

George Dawson, MD, DFAPA

 

 

References:

1:  Bostrom N.  Superintelligence: Paths, Dangers, Strategies.  Oxford, England: Oxford University Press, 2014: 150-152. 

2:  Sims A.  Symptoms in the Mind: An Introduction to Descriptive Pathology.  London, England: Elsevier Limited, 2003: 3.

3:  O’Neil C.  Weapons of Math Destruction.  New York, NY: Crown Books, 2016.

 

Friday, April 11, 2025

The Tech Bros Want to Replace Your Teachers and Doctors

 The Matrix


 

Just last week I was contacted by an acquaintance about Viagra.  He was not a physician and got the prescription through an online business that specializes in dispensing hair loss, erectile dysfunction, anxiety, and depression medications. When I see these businesses advertising that combination of medications, it always piques my interest. Why these medications? Comparing them with the most prescribed drugs in the US, 3 antidepressants are in the top 20: sertraline, trazodone, and escitalopram.  They can double as anxiety medications.  Viagra (sildenafil) is number 157 and Cialis (tadalafil) is number 172.  Finasteride can be used for both hair loss and prostatic hypertrophy, and it is number 72.  Topical minoxidil is not on the list. It is not like there is a shortage of prescriptions for any reason.

My contact person had talked with one of the online prescribers and was not sure about how he was supposed to take the medication. Should he take it every day or just on the days he was going to have intercourse?  Reading the prescription label and the information he was sent was not helpful.

More of these online prescribing services seem to be advertising every day.  They promise cost effectiveness, the same medications that your physician would prescribe, ease of use, and no embarrassment.  How many times have you been in line at your clinic or pharmacy and had a staff person belt out some information about you that you preferred to keep private?  That line on the floor separating you from the other patients is not enough distance to muffle a receptionist shouting through plexiglass.  The online service promises to send you the medication in a plain brown wrapper. 

The real downsides to this new relationship are never mentioned. No access to your records to check for contraindications, drug-drug interactions, pre-existing medical conditions, the status of your liver and kidney function, or allergies. No access to your physician who may know you so well that they can say if taking a new medication would be advisable or not. No detailed discussions of risks, potential benefits, and unknowns. For me that discussion has taken longer than most of the telemedicine visits I have heard about.  And most importantly – no access to somebody who knows your situation if something goes wrong.

There is a real issue about how much information these rapid online prescribers keep on file and what it is used for.  Do they list your major medical conditions?  Does that lead to marketing? Does that lead to data mining to develop sufficiently large programs to make more money off you?  Recall that wherever your data is on the Internet, somebody is trying to profit from it.

That brings me to a stark conclusion about capitalism that I discovered too late in life. Growing up in the US, you are sold on the idea that capitalism and democracy are the mainstays of the country.  We are special because of both, and we do both better than anyone else in the world.  The wealthy are idealized and everyone aspires to be wealthy.  If you can't get wealthy, maximizing your material possessions seems to be a substitute.

American products are good because our environment produces entrepreneurs, and competition among entrepreneurs produces superior products.  Think about that for a second.  The entrepreneur gets all the credit.  Forget about all of the science and engineering behind any product.  The faceless people laboring behind the scenes are hardly ever mentioned. If you are industrious enough, you might be able to find out who holds the patents, but in the end they are all the property of a large company.  And that company is there for one reason – to make as much money as possible.

In a service industry like medicine, corporate profits were initially hard to come by because it was a cottage industry of private physicians.  Even as the corporate takeover began in the 1980s, physicians resisted to some extent as a powerful class mediating between corporate interests and the interests of physicians and patients. The end run around that physician mediation was hiring them as employees.  Initially corporations proposed that they were going to make primary care more accessible and minimize specialists.  In the end that was merely a tactic, and they acquired specialty care as well as primary care.  Today most physicians are employees and have minimal input into their practice environment.  They are essentially told by middle managers how to practice medicine.  They work by default for companies like managed care companies and pharmacy benefit managers that waste physician time to rubber-stamp their rationing procedures. 

The profits from the corporate takeover of medicine are high.  It is after all a recipe for making money.  There is a stable subscriber base fearful of medical bankruptcy, and the corporation can decide how much of those funds it wants to spend. In thinking of new ways to make more money, telemedicine is the latest innovation. Convenience is a selling point. It has been used for decades to reach people in rural areas who would have a hard time travelling long distances to clinics.  But the current model is more like Amazon online shopping.  If you have condition x, y, or z – contact us and we will get you a prescription. Better yet, let’s take the pharmacy middleman out of the picture and prescribe and sell you the medication at the same time.    

A recent commentary in the NEJM pointed out the potential problems of the new relationship between pharmaceutical companies and telehealth firms (1). It is as easy to imagine as the following thought experiment.  Suppose you are watching a direct-to-consumer ad about a weight loss drug.  You go to the suggested web site where it tells you to make a telehealth appointment the same day for a nominal fee. One study showed that 90% of patients referred through this sequence got a prescription for the advertised drug.  The pharmacoepidemiology, quality of care, and legal ramifications of these arrangements are unknown.  The scrutiny is nonexistent compared with the claims that physicians were being influenced for decades by free lunches.  That matches my suspicion that the physician conflict of interest hype was more a political tactic than reality to suppress any objections to the political and corporate takeover of medicine.  

That brings me to the Bill Gates (2) comment.  Expectedly, he is an unabashed promoter of computer technology and its latest version – artificial intelligence or AI.  His thesis is that AI will commoditize intelligence to the point that humans will not be necessary for most things, including teaching and medicine. No mention of the conflict of interest.  The company he founded – Microsoft – is currently heavily marketing computers with an early version of AI. A couple of years ago they also changed to a license-for-life model.  In other words, when you buy a Microsoft computer or software package – you no longer own it outright.  You must pay a monthly licensing fee if you use it, and if they decide not to support your computer anymore – you must upgrade it and continue paying monthly fees for as long as you use your new computer.  Or until they tell you again that you have to buy a new one.  Even though intelligence is “free,” Microsoft and all of the other major tech companies are not really giving it away – they have a recipe for making money off of you for the rest of your life.   

There is a reason that doctors don’t know much about business or politics. Both are highly corrupting influences. Medicine is a serious profession that is squarely focused on mastering a large volume of information and technical skill and keeping that current. Businesses on the other hand are focused on every possible way they can get your money and they are very good at it. If it comes down to an AI program providing medical care that is all you really need to know.

 

George Dawson, MD, DFAPA

 

References:

1: Fuse Brown EC, Wouters OJ, Mehrotra A. Partnerships between Pharmaceutical and Telehealth Companies - Increasing Access or Driving Inappropriate Prescribing? N Engl J Med. 2025 Mar 27;392(12):1148-1151. doi: 10.1056/NEJMp2500379. Epub 2025 Mar 22. PMID: 40126465.

2:  Richards B.  Bill Gates Says AI Will Replace Doctors, Teachers and More in Next 10 Years, Making Humans Unnecessary 'for Most Things'.  People Magazine March 29, 2025.  https://people.com/bill-gates-ai-will-replace-doctors-teachers-in-next-10-years-11705615

 

Graphic Credit:

Click on the graphic directly for full information on the Wikimedia Commons web site including CC license.  It is used unaltered here. 

 

 


Monday, November 24, 2014

Will The AI Apocalypse Be Worse Than Customer Service?



I am a survivalist and make no excuses for it.  I have posted my experiences in the cold weather and nearly freezing to death.  I am sure that is part of what makes a survivalist.  That combined with an early recognition that men often don't make rational decisions.   They are capable of making irrational decisions on a grand scale.  I was in grade school during the Cuban Missile Crisis.  Being in a small town, we escaped all of the duck and cover exercises that kids in the big city went through.  But paranoia about the Russians, nuclear war, and radioactive fallout was always there.  I worked for the town library in the 1970s and found out it had been the local nuclear fallout shelter.  I spent days clearing out 30-gallon steel drums that were supposed to double as water and waste containers in a fallout emergency.  In those days before atmospheric nuclear tests were banned, I can still recall a radioactive cloud passing over our town.  Experts from the state university were on television talking about Strontium-90 in the fallout and how that could end up in milk products.  My grandfather picked up on that and referred to it as "Strawberry-90".  He was somewhat of a radical, predicting that there was going to be a "revolution" at some point as the ultimate solution to a corrupt government.  Survivalism may have genetic determinants.

I was surprised when Stephen Hawking came out earlier this year and said that artificial intelligence (AI) represented a threat to humans.  I have seen all of the Terminator films and the Sarah Connor Chronicles.  As expected, any tale of a band of zealots surviving against all odds appeals to me.  But then it seemed that this was more than a cultural and artistic effort.  One of the arguments by Bostrom suggests that the survival of all of the animals on the planet depends on the animal with the highest intelligence - homo sapiens.  If a machine intelligence were developed one day that surpassed human intelligence, it would follow that the fate of humans would depend on that machine intelligence.  There are competing arguments out there that suggest a model where the AI interests and human interests would compete politically.  Can you imagine how humans would fare in our current political systems?  A lot of the experts suggest that we won't have to imagine battling robots in human form, and that makes sense.  It is clear that there are thousands of cyber attacks against our infrastructure every day.  Imagine what a concentrated AI presence unencumbered by sociopathy or patriotism could do.

Imagining the battlefield of the future scenes from any Terminator film or same-themed video game, I decided this morning that you don't need a high tech approach to wreak havoc among the populace and drain their resources.  You only need Customer Service.  The concept needs to be refined to modern customer service.  Even in the early days of the Internet, you could talk to a fellow human and they would hang in there with you until the problem was solved.  I can recall calling Gateway Computers for an out-of-the-box problem back in the 1990s.   The technical assistance rep and I completely disassembled and reassembled my PC over the next 2 hours.  And the end result was that it worked perfectly for the next 5 years.  I doubt that anything remotely that heroic happens today.

Twenty-one days ago I downloaded graphics software from Amazon.   I am an Amazon Prime customer and order just about everything from them.  I am not a stockholder, and my only interest is in getting things that nobody else stocks as soon as possible.  I had previously downloaded software from them and everything went well.  This time, I got an activation code and no serial number.  I complained to customer service and got an e-mail saying they would give me my money back, but for the serial number problem I needed to contact the manufacturer.  To back up a minute, I have no idea how I got that e-mail through to Amazon and could not replicate what I did in a hundred tries.  The obstructionist beauty that underlies all telephone queues and Internet sites is that it is very clear that they are not really designed to get you through to anyone.  It is a maze of dead ends and non-answers.  At many of the dead ends you are polled: "Was this page helpful?".  So far I have not found a single page that was.

The dead ends at the computer graphics software site were even more formidable.  In order to contact customer service I had to set up an account.  After doing that I needed a serial number.   Of course that was my question in the first place.  How can I ask about getting a serial number when I need a serial number to ask the question?  It seemed like the ultimate dead end.  Amazon did send me a customer service number for the software company.  This number was not available on the company's web site.  In calling the number, their queue provided 4 options, none of which applied to me.  It gave options for order numbers that started with different numbers, and I had an Amazon number that did not fit any of the choices.  Just like my previous adventure in medical diagnostic queues - I picked one.  A scratchy recording of bad electronic music started playing.  It was interrupted every minute by a worse electronic voice telling me how important my call was and how I would be forwarded to a customer service rep.  That went on for half an hour and then the voice said:  "We are sorry but there is no one here to take your call.  Please leave a message with your number and we will get back to you."

That was hours ago.  Given the attitude projected by this company, I am not holding my breath on the return call.  I have 1 week left to try to activate software that I paid over $400 for.  There is no solution in sight and it does not appear anyone is even interested in solving the problem, except me.  I can get my money back - but the whole point of this is that I really want to work with that software.

Implications for the AI Apocalypse?  It doesn't take much to defeat Internet dependent humans and deplete their resources.  I have actually taken PTO to try to accomplish this.

I don't think there will be a shot fired in the AI Apocalypse of the future.  No intense battles between humans and cyborgs.  No Doomsday Weapon.

 Just a low tech endless loop of customer service dead ends.


George Dawson, MD, DFAPA

Supplementary 1:   Photo credit here is FEMA.  It is an open access copyright free photo per their web site.

Supplementary 2:  My customer service problem was resolved today (on Tuesday November 25, 2014).  The final solution was given by Amazon and they deserve the credit for resolving this problem.  I don't think that detracts from noting the overall trend of decreasing support and what that implies for IT in healthcare and the culture in general.