Note: This essay is written by an old human brain that was writing essays and poetry decades before there was an Internet. No AI was used to create this essay.
Artificial Intelligence (AI) hype permeates every aspect of modern life. We see daily predictions of which groups of workers will be replaced and how AI is going to cure every human disease. It is no accident that the main promoters of AI stand to make significant profits from it. Vast amounts of money are being invested and gambled on AI on Wall Street. Educators are concerned that students are using it to write, in just a few minutes, the essays that took us hours or days to write in college. That application leads to the obvious question of what the end product of college will be if all the serious, critical, and creative thought has been relegated to a machine.
Apart from sheer data scraping and synthesis of what amounts to search inquiries, AI seems to be imbued with magical qualities that probably do not exist. It is like the science fiction of the 20th century: alien beings superior to humans in every way because they lack that well-known weakness, emotion. If only we had a purely rational process, life would be much better. The current promoters tend to describe this collective AI as making life better for all of us and minimize any risks. The suggested risks also come from the sci-fi genre in the form of Terminator-type movies where the machines decide it is in their best interest to eliminate humans and run the planet on their own. There are the usual failed programs to preserve human life at all costs or to destroy humans only if they are carrying weapons.
But AI thought experiments do not require even that level of
complexity to create massive problems.
Consider Bostrom’s well-known example of a paperclip-making machine run by AI (1). In that example PaperClip AI is charged with the task of maximizing paperclip production. In a case of infrastructure profusion it “proceeds by converting the Earth and increasingly large chunks of the observable universe into paperclips.” He gives several reasons why obvious fixes like setting a production limit or a production interval would probably not work and would still lead to catastrophic infrastructure profusion. The current limitation on this kind of AI is that it does not have control over acquiring all these resources. It also lacks the ability to perceive how to correct production in a fully autonomous mode. Bostrom also adds characteristics
to the AI – like motivation and reinforcement that seem to go beyond the usual conceptualizations.
Where would they come from? If we are not thinking about programmed algorithms, what kind of intelligence has its own built-in reinforcement and motivation schedule independent of the environment? After all, the task of producing just enough
paperclips without consuming all of the resources on the planet is an easy
enough task for a human manager to accomplish.
Bostrom suggests that it is an intuitive task for humans but not
so much for machines.
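To make the objective problem concrete, here is a minimal toy sketch (my own illustration, not Bostrom’s formalism) of how the runaway behavior follows entirely from the objective the machine is handed. The agent, numbers, and stopping rule are all hypothetical, and the “bounded” version is exactly the kind of naive fix Bostrom argues would still fail once uncertainty and instrumental incentives are added back in.

```python
# Toy illustration only: a greedy agent that converts resources into paperclips
# for as long as its objective says "more is better." All names and numbers are
# hypothetical; this omits the uncertainty and instrumental reasoning Bostrom
# uses to show why even the bounded objective is not a real fix.

def run_agent(objective, resources=2_000):
    paperclips = 0
    # Keep converting while one more paperclip raises the objective's score.
    while resources > 0 and objective(paperclips + 1) > objective(paperclips):
        paperclips += 1
        resources -= 1
    return paperclips, resources

maximize = lambda n: n              # "more paperclips is always better"
TARGET = 1_000
bounded = lambda n: min(n, TARGET)  # "make 1,000, then stop caring"

print(run_agent(maximize))  # (2000, 0)    -> every available resource consumed
print(run_agent(bounded))   # (1000, 1000) -> stops at the target, resources left over
```

The contrast is the point made above: a human manager supplies the bounded objective intuitively, while the machine has only whatever objective it was given.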
A couple of events came to my attention in the past week
that make the limitations of current AI even more obvious – especially contrasted
with the hype. The first is the case of
a high-profile celebrity who has lodged a complaint against the X (formerly Twitter) AI called Grok. In it, she points out that the AI has been generating nude or semi-nude versions of her adult and teenage photos. I heard an interview where she mentions that this practice is widespread and that other women have contacted her about the same problem. This practice is in direct conflict with X’s site use policy, which says that users doing this will be banned and referred for prosecution. She has not been successful in getting the images stopped and removed.
The second event was a Bill Gates clip where he points out
that what he considers a sensitive measure of progress – mortality in children
less than 5 years of age – has taken a turn for the worse. He predicts the world descending into a Dark Age if we are not able to reverse this change. That news comes in the context of Gates predicting that AI will replace physicians, teachers, and most humans in the workplace in the next 10 years. Of course he was promoting a book at the time. In that same clip he was optimistic about the effects of AI on health and the climate, despite the massive toll that AI takes on power-generating resources, to the point that some companies are building their own municipal-sized power plants.
What are the obvious disconnects in these cases? In the first, AI clearly has no inherent moral decision making at this point. That function is still relegated to humans and, given what is being described here, that is far from perfect. In this case the complainant has some knowledge of the social media industry and said that she thought any engineer could correct this problem quickly. I am not a computer engineer, so I am speculating that it would take a restrictive or algorithmic program. But what about the true deficit here? It could easily be seen as a basic deficit in
empathy and an inability to apply moral judgment and its determinants to what are
basic human questions. Should anyone be
displaying nude photos of you without your consent? Should identified nude photos of children
ever be displayed? AI in its current
iteration on X is clearly not able to answer these questions in an acceptable
way and act accordingly.
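Purely as a hypothetical illustration of the kind of “restrictive or algorithmic program” being speculated about here (nothing in this sketch reflects how X or Grok is actually built), a policy gate in front of an image generator could refuse certain categories of request before any image is produced. The request fields and the upstream classifiers that would fill them in are assumed for the sake of the example; building those classifiers reliably is the hard part in practice.

```python
# Hypothetical policy gate in front of an image-generation model.
# The field names, rules, and upstream classifiers are assumptions for
# illustration and do not describe any real system.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""

BLOCKED_RULES = [
    (lambda r: r["depicts_real_person"] and r["sexualized"] and not r["has_consent"],
     "sexualized depiction of a real person without consent"),
    (lambda r: r["depicts_minor"] and r["sexualized"],
     "sexualized depiction of a minor"),
]

def moderate(request: dict) -> ModerationResult:
    """Check the request against each blocked rule before generation proceeds."""
    for rule, reason in BLOCKED_RULES:
        if rule(request):
            return ModerationResult(allowed=False, reason=reason)
    return ModerationResult(allowed=True)

# Example request, already flagged by (assumed) upstream classifiers.
request = {"depicts_real_person": True, "sexualized": True,
           "has_consent": False, "depicts_minor": False}
print(moderate(request))  # blocked: sexualized depiction of a real person without consent
```

Even a crude gate like this underlines the point: the rules encode a human moral judgment, and the machine is only checking boxes.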
The second contrast is only slightly more subtle. Conflict of interest is obvious, but Gates seems not to recognize that the descent into the Dark Ages he describes, driven by an increasing death rate in children 5 years of age and younger, depends almost entirely on human decision making. It runs counter to the decades of human medical decision making that he suggests will be replaced. Basic, inexpensive, life-saving medical care has been eliminated by the Trump administration. This has led to predictions that hundreds of thousands if not millions of people will die as a direct result. Is AI going to replace politicians? What would be the result if it did? Cancelling all these humanitarian programs is
a more complicated decision than not publishing nude photos of non-consenting
adults or any children. It is a
marginally rational ideological decision.
Is the AI of different politicians going to reflect their marginally
rational ideology or are we supposed to trust this political decision to a
machine with unknown biases or ideologies?
How will that AI decision making be optimized for moral and political decision
making? Will AI be able to shut down the
longstanding human tendency to base decisions on power over morality? If politicians allow AI to replace large
numbers of workers, will it also be able to replace large numbers of politicians
and managers? It can easily be argued that the decisions of knowledge workers are more complex than those of managers.
A key human factor is empathy, and it requires emotional experience. You get a hint of that in the best technical description of empathy I have seen, from Sims (2):
“Empathy is achieved by precise, insightful, persistent, and
knowledgeable questioning until the doctor is able to give an account of the
patient’s subjective experience that the patient recognizes as his own… Throughout
the process, success depends upon the capacity of the doctor as a human being
to experience something like the internal experience of the other person, the
patient: it is not an assessment that could be carried out by a microphone and
a computer. It depends absolutely upon
the shared capacity of both the doctor and patient for human experience and
feeling.” (p. 3)
The basic problem that machines have is that they are not
conscious at the most basic level. They
have no experience. In consciousness research,
early thinking was that a machine would be conscious if a human communicating with
it experienced it like another human being.
That was called the Turing Test after the scientist who proposed
it. In the case of computerized chess, there was a time several years ago when the machine was experienced as if it were making the chess moves of a human being. The headlines asked “has the Turing Test been passed?” It turns out the test was far too easy. There are, after all, a finite number of chess moves and plenty of data about the probabilities of each move made by top players. That can all be handled by number crunching.
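As a rough sketch of what “number crunching” means here, move selection can be reduced to looking up what strong players did most often from the same position. The move counts and position key below are hypothetical, and real chess engines add deep search on top of statistics like these; the point is only that nothing in the procedure requires experience.

```python
# Toy illustration of move selection by statistics: pick whatever move top
# players chose most often from the current position. The database is a
# hypothetical stand-in for the large game collections real engines draw on.

from collections import Counter

# Hypothetical counts of moves played by strong players, keyed by position.
MOVE_DATABASE = {
    "start_position": Counter({"e2e4": 44000, "d2d4": 38000, "c2c4": 9000, "g1f3": 8000}),
}

def choose_move(position: str) -> str:
    """Return the statistically most common move recorded for this position."""
    moves = MOVE_DATABASE.get(position)
    if not moves:
        raise KeyError(f"no data for position {position!r}")
    return moves.most_common(1)[0][0]

print(choose_move("start_position"))  # e2e4
```

It is bookkeeping over a large table, not understanding.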
What happens when it comes to real human decisions that require that experience? And by experience I mean the event with all of the integrated emotions. Is AI likely to
recognize the horror of finding your nude photos on the Internet, or scammers trying to blackmail you over a
fictional event, or the severity of your anxiety from being harassed at work, or
the devastating thoughts associated with genocide or nuclear war? Machines have no conscious experience. Without that experience how can we expect a
machine to understand why the sexual exploitation of children and adults is
immoral, wrong, or even anxiety-producing?
It is also naïve to think that AI will produce ideal decisions. Today’s iteration may be the crudest form, but everyone is aware of the hallucinations. The more correct term from psychiatry is confabulation, or making things up in response to a specific question. When you consider that today’s AI is mostly a more sophisticated search engine, there really is no reason for it. As an example, I have asked for an academic reference in a certain citation style and will get one. When I research that reference, I find that it does not exist. I have had to expend
considerable time finding the original journal and looking for the reference in
that edition to confirm it is non-existent.
Explanations for these phenomena extend to poor data quality, poor
models, bad prompts, and flawed design.
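Part of that verification can be automated. As a sketch of one approach (the DOI and title below are invented for illustration, and a confabulated reference without a DOI still requires the manual journal search described above), a claimed DOI can be checked against CrossRef’s public works endpoint:

```python
# Sketch: screen an AI-supplied citation by looking the claimed DOI up on
# CrossRef and comparing titles. This flags nonexistent or mismatched DOIs;
# it does not judge whether the cited work actually supports the claim.

import requests

def check_citation(doi: str, claimed_title: str) -> str:
    """Return a short verdict on whether the DOI exists and roughly matches the title."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return "DOI not found - likely confabulated"
    resp.raise_for_status()
    actual_title = resp.json()["message"]["title"][0]
    # Crude comparison of the first few words of each title.
    if claimed_title.lower().split()[:5] == actual_title.lower().split()[:5]:
        return f"DOI exists and title matches: {actual_title}"
    return f"DOI exists but points to a different work: {actual_title}"

# Example with a made-up DOI of the kind an AI might generate (hypothetical values).
print(check_citation("10.1000/fake.2023.001", "A study that does not exist"))
```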
The problem is acknowledged and many AI sites warn about the
hallucinations. A more subtle problem at
this point is how AI will be manipulated by whatever business, government, or political body controls it. That problem was pointed out in a book written about a decade ago (3) illustrating how algorithms applied to individual data can reinforce human biases about race and poverty and promote inequality. I have seen no good explanation of why AI would be any different, and in fact it probably makes the financial system less secure.
As I keep posting about how your brain and mind work, please keep in mind that it is a very sophisticated and complex process. It is much
more than looking at every available reference and synthesizing an answer. There are the required experiential, emotional,
cognitive, value-based, and moral components.
Superintelligence these days implies that at some point machines will always have the correct and best answer. That certainly does not exist now, and I question whether it will in the future. It is a good time to take a more realistic view of AI and construct
some guardrails.
George Dawson, MD, DFAPA
References:
1: Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford, England: Oxford University Press, 2014: 150-152.

2: Sims A. Symptoms in the Mind: An Introduction to Descriptive Psychopathology. London, England: Elsevier Limited, 2003: 3.

3: O’Neil C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown, 2016.
