Humans are More Unique than the Existential Dread of AI Suggests

I am no fan of the exaggerated claims arriving from the shiny-bauble hype machine for ChatGPT and other large language models (LLMs). I’m also no fan of the “end of the world” fear-mongering found at the other end of the spectrum.

Let’s look, instead, at a different question: Why are so many drawn to the idea that Artificial Intelligence threatens an end to the human race?

Humans Love the Threat of an Apocalypse

When I was a wee lad (in the 1960s), we were rightly worried about nuclear apocalypse. But this was just a start. Environmentalists warned of apocalyptic ends for prominent species like Grizzlies, Mountain Lions, Wolves, Bald Eagles, Condors, and more. Religious figures predicted holy apocalypse (as they have throughout history). And there were predictions of over-population so extreme that mankind would not survive the early 1980s. In the 1990s we were even told Y2K would cause computers to destroy mankind. Of course, we didn’t have that nuclear apocalypse, those species recovered with government protection, there has (yet again) been no religious apocalypse, today we’re told America faces an under-population apocalypse, and solid, hard work by programmers made Y2K a non-event. Sigh.

Yet it continues. While there is justifiable concern about climate change today, we are bombarded with messages threatening mankind’s very existence. Yup. Another apocalypse.

Humanity seems to love a good apocalypse. Yet the history of mankind reveals an extraordinary ability to adapt and thrive despite change, even disaster. Speaking of history, threats of an AI apocalypse are nothing new.

Musical Apocalypse: AI Generated “Chopin”

In her excellent book on AI, Melanie Mitchell describes how, in the 1990s, a musician trained a computer to write works “in the style of classical composers such as Bach and Chopin.” No one had thought it could be done. But the computer created quite passable pieces.

Douglas Hofstadter (author of Gödel, Escher, Bach and Mitchell’s mentor) discussed the computer composer with an audience including music theory and composition faculty at the prestigious Eastman School of Music. Then he had a pianist play two pieces. One was an obscure (hard to recognize) mazurka by Chopin. The other was a mazurka composed by the computer. Asked to guess which piece Chopin wrote, the faculty mostly chose the computer-generated piece.

It’s not a surprise. What the computer created was a stereotype of Chopin, a piece sounding just like what we would expect. This takes skill, but it is not creativity. When Chopin was born, this “stereotype” didn’t exist at all. As Chopin developed he became an incredible composer AND developed a specific style. Yet every new work expanded that style as he tested new ideas based on what he uniquely heard, how he understood the piano, and his own unique sensibilities. In other words, a new piece didn’t fit a “stereotype” a computer could generate; it was somehow based on what is uniquely, humanly Chopin. Had he lived longer, he would have written works which expanded what we now consider his stereotype.

Related to music is the artistic panic stirred up by today’s shiny image-generating AI bauble, MidJourney. Except all MidJourney does is create stock photography based on user prompts. That is sometimes useful. Yet MidJourney is not “creative.” Can we calm down?

Intelligence Apocalypse: Computers Playing Chess

Today’s existential crisis echoes (very loudly) an earlier one, which arrived when computers finally played chess so well they could beat chess masters.

This fact shook humanity’s belief in its own intelligence. Why? We had been told that chess was the ultimate indication of human intelligence. Yet it took only 40 or so short years for mankind to develop computers which beat the top world chess masters. What we should have learned from chess was that human intelligence is unusually complex and also very hard to articulate.

What did we learn about chess and intelligence? Very few humans, it turned out, were exceptional at chess because it requires specific, narrow abilities. And computers proved better suited to those abilities, like searching trees of possible moves or databases of chess board positions and their solutions. Chess wasn’t a showcase of human intelligence, only a showcase of certain narrow parts of it.
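
For readers curious what “searching trees of possible moves” means in practice, here is a minimal sketch of my own (a toy illustration, not anything from Deep Blue or any real engine): the classic minimax idea, in which the program mechanically tries every legal move, then every reply, and keeps whichever branch scores best. The helpers passed in (legal_moves, apply_move, evaluate) are placeholders standing in for the game-specific details.

```python
# Toy sketch of minimax search: try every move, then every reply,
# and keep the branch with the best score. Purely mechanical.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Best achievable score when looking `depth` moves ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static score of this position

    scores = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                legal_moves, apply_move, evaluate)
        for m in moves
    )
    return max(scores) if maximizing else min(scores)
```

Real chess programs pile enormous refinements on top of this (pruning, opening books, endgame databases), but the core remains this kind of exhaustive, mechanical search, which is exactly why beating humans at chess told us so little about intelligence.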

Today’s Apocalypse: Large Language Models like ChatGPT

We do not face an existential crisis from AI. What we face, instead, is a mass market version of the chess surprise. Once again, many were wrong in what they considered to reflect human “intelligence.”

Of course, this is a bit funny when we realize what an LLM does is (essentially) Google things, select specifics, and report back using words which appear to make sense and fit a chosen style. It’s even more unusual when we learn LLMs just make shit up if they can’t find what they need. I’ll even suggest this IS a difficult part of human intelligence: used one way it makes a bullshit artist, while used another it creates the next great American novel.
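
To see why such a system can sound so fluent while making things up, here is a toy sketch of my own (vastly simpler than any real LLM, and not how ChatGPT was built): a tiny model that only knows which word tends to follow which word. It produces plausible-sounding text with no notion of whether any of it is true.

```python
import random
from collections import defaultdict

# A tiny "what word follows what" model, "trained" on a few sentences.
training_text = (
    "chopin wrote mazurkas . chopin wrote nocturnes . "
    "computers wrote mazurkas too ."
)

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word.
word = "chopin"
output = [word]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # fluent-sounding, never fact-checked
```

The toy only illustrates the gap described above: producing words which appear to make sense is not the same thing as knowing anything.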

Yet LLMs are nowhere close to being intelligent. The Digerati, though, like claiming they are. Why? Some days it seems that, wanting to be masters of the universe, they claim superhuman abilities for their devices and software. Since those aren’t superhuman (it’s just software), they denigrate human abilities so their software might appear superhuman. Thus, they suggest the human mind is only a computer (it is far different, and far more). They also repeat an error stretching back into antiquity which suggests the mind is separate from the body. (It’s not that simple.)

The Whole of Human Reality

Human beings are incredible, the result of an exceptional evolutionary path. We have evolved to thrive in a world far more complex than data can capture. Of course, sometimes that can look quite messy. Neatniks might hate the idea, but one interesting truth from studying complexity is that mess is often critical along human paths to success.

As to body/mind, philosopher Andy Clark notes how the borders around “mind” are fuzzy. For example, mathematicians rely on a notepad and pencil as if they were part of their mind in pursuing their work. Clark further notes that our physical environment is also part of our mind. Other studies show the brain alone is not our mind and that the borders of our body are far more permeable than we assume. For example, part of what we hear arrives through nerves throughout our body and not only our ears, one reason a live concert is different from listening to music with headphones.

Human Complexity

We should take comfort in the brilliant evolutionary path which led to modern humans. We are, after all, biological beings who live and work in groups while also being independent. Our intelligence arises from the interplay of our biology, our physical world, our emotional and psychological world, and the ideas and concepts we embrace. Along with this intelligence come other human abilities: to love, to make friends, to judge who to trust, to anticipate possibilities, to invent as Chopin did, to have children and raise a family, to create a path in the world toward our own success, to choose when to listen to instinct and when to rely on reason, and much, much more.

Unfortunately, it seems hard for employers to comprehend humans. This is not too surprising. Managerial practice preaches that business success arrives by logic and reason. But logic and reason, while exceptionally useful, reflect only a portion of the abilities possible from each individual human.

Having said that, our next existential crisis with AI seems predictable: it is not going to be that hard for AI to become capable of logic and reason. After all, logic and reason are not the whole of human intelligence. And those who mistakenly believe they are will be shocked when AI delivers them.

Employment and the Existential Dread of AI

The greatest waste in America is failure to use the abilities of people.
W. Edwards Deming

Human abilities result from our whole being and are far more than only the specific and narrow skills a company thinks it needs. So just imagine what might happen in our economy if companies enabled employees to use a far greater portion of their whole abilities.

Unfortunately, there is evidence that especially narrow-minded managers enamored of AI prefer the certainty of robotic limits to the possibility of doing far more by relying on human beings.

That businesses do this feeds the fear that AI will replace humans. Leaving human abilities untapped, companies gladly believe that by using AI they can limit their difficulties with human fallibility. Yet human fallibility resides side by side with tremendous human potential. These companies are in for a HUGE surprise from AI’s own uncertainty: never knowing whether its answers have been hallucinated.

Regardless, any crisis in jobs from this generation of LLMs will be the fault of managers, executives, consultants, and the AI hype machine. Let’s hope that humanity goes on proving it is far more than what many narrowly expect.

Peace and be well.

©2023 Doug Garnett — All Rights Reserved


Through my company, Protonik LLC, I consult with companies as they design and bring to market new and innovative products. I am writing a book exploring the value of complexity science for driving business success. Protonik also produces marketing materials including documentaries, websites, and blogs. As an adjunct instructor at Portland State University I teach marketing, consumer behavior, and advertising.

You can read more about these services and my unusual background (math, aerospace, supercomputers, consumer goods & national TV ads) at www.Protonik.net

Categories:   Big Data and Technology, Business and Strategy, Complexity in Business, Innovation
