As a Financial Times headline put it, “AI in Finance Is Like ‘Moving from Typewriters to Word Processors’” (June 16, 2024). But, I think, not much further than that, despite all the excitement (see “Ray Kurzweil on How AI Will Transform the Physical World,” The Economist, June 17, 2024). At least, doubts are warranted regarding the “generative” form of AI. (IBM defines generative AI as referring to “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.”)
The conversational and grammatical capacities of an AI bot like ChatGPT are impressive. This bot writes better and appears to be a better conversationalist than what must be a significant proportion of human beings. I am told that he (or she, except that the thing has no sex and I am anyway using the neutral “he”) efficiently performs tasks of identification and classification of objects and that he does simple coding. It’s a very sophisticated program. But he crucially depends on his humongous database, in which he makes zillions of comparisons with brute electronic force. I have had occasion to verify that his analytical and artistic capacities are limited.
Sometimes, they are astonishingly limited. Very recently, I spent a couple of hours with the latest version of DALL-E (the artistic side of ChatGPT) trying to get him to correctly understand the following request:
Generate an image of a strong individual (a woman) who walks in the opposite direction of a crowd led by a king.
He just could not understand. I had to elaborate, reformulate, and re-explain many times, as in this modified instruction:
Generate an image of a strong and individualist individual (a woman) who walks in the opposite direction of a nondescript crowd led by a king. The woman is in the foreground and walks proudly from west to east. The crowd led by the king is in the close background and walks from east to west. They are going in opposite directions. The camera is south.
(By “close background,” I meant “near background.” Nobody is perfect.)
DALL-E was able to repeat my directives when I tested him, but he could not see the glaring errors of his visual representations, as if he did not understand. He produced many images where the woman on the one hand, and the king and his followers on the other hand, walked in the same direction. The first image below provides an intriguing example of this basic misunderstanding. When the bot finally drew an image where the woman and the king walked in opposite directions (reproduced as the second image below), the king’s followers had disappeared! A child learning to draw recognizes his errors better when they are explained to him.
I said of DALL-E “as if he did not understand,” and that is indeed the problem: the machine, actually a piece of code and a big database, simply does not understand. What he does is impressive compared to what computer programs could do until now, but this is not thinking or understanding, not intelligence as we know it. It is very advanced computation. But ChatGPT does not know that he is thinking, which means that he is not thinking and cannot understand. He just repeats patterns that he finds in his database. It looks like analogical thinking but without the thinking. Thinking implies analogies, but analogies don’t imply thinking. It is thus not surprising that DALL-E did not suspect the possible individualist interpretation of my instruction, which I did not spell out: a sovereign individual declined to follow the crowd loyal to the king. A computer program is not an individual and does not understand what it means to be one. As suggested by the featured image of this post (also drawn by DALL-E after much prodding, and reproduced below), AI cannot, and I suspect will never be able to, understand Descartes’s Cogito ergo sum (I think, therefore I am). And this is not because he cannot find Latin in his databases.
Nowhere in his database could DALL-E find a robot with a cactus on his head. The other Dalí, Salvador, could have easily imagined that.
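To make the pattern-matching point concrete, consider a deliberately crude sketch in Python. It is purely illustrative, nothing like DALL-E’s actual architecture, and all the captions and the scoring rule are invented for the example: the program picks the stored caption that shares the most words with a prompt, with no notion of what any word means.

```python
# Toy illustration: "pattern matching" by brute-force comparison,
# with no understanding of what any word means.
# All data here is invented; this is NOT how DALL-E or ChatGPT work.

def overlap_score(prompt: str, caption: str) -> float:
    """Fraction of prompt words that also appear in the caption."""
    p, c = set(prompt.lower().split()), set(caption.lower().split())
    return len(p & c) / len(p) if p else 0.0

database = [
    "a crowd walks behind a king",
    "a woman walks with a crowd and a king",
    "a woman walks alone at sunset",
]

prompt = "a woman walks in the opposite direction of a crowd led by a king"

# Brute-force comparison against every stored pattern.
best = max(database, key=lambda caption: overlap_score(prompt, caption))
print(best)  # -> "a woman walks with a crowd and a king"
```

The winning caption has everyone walking the same way, which is exactly the kind of error DALL-E made: all the right words are there, but the relation between them (“opposite direction”) is invisible to mere word overlap.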
Of course, nobody can forecast the future and how AI will develop. Prudence and humility are required. Advances in computation will likely produce what we would now consider miracles. But from what we know about thinking and understanding, we can safely infer that electronic devices, as useful as they are, will likely never be intelligent. What’s missing in “artificial intelligence” is the intelligence.
******************************
READER COMMENTS
Scott Sumner
Jun 19 2024 at 3:15pm
“this is not thinking or understanding, not intelligence as we know it. It is very advanced computation.”
Isn’t the problem that we actually have no idea what intelligence is? Human intelligence could very possibly be nothing more than “advanced computation”. How could we know that computers won’t exceed our intelligence?
Pierre Lemieux
Jun 19 2024 at 8:44pm
Scott: You’re right: We don’t know what consciousness is (even just in the sense of how it works) any more than we know what life is. And the more we learn, it seems, the more complex they appear. And we can’t forecast the future. But doesn’t it seem that there is an infinite difference between cellular automata and life, or between AI and consciousness?
Scott Sumner
Jun 21 2024 at 3:08pm
I believe there is a big difference between current human thought and the thinking of various types of AI, as you suggest. I’m less sure that this will always be true. Presumably there’ll always be some differences, but I can imagine future AIs that have quite a few attributes of human intelligence. I could imagine future AIs that can create new theories, for instance.
Overall, I’m fairly agnostic on whether AIs will ever have consciousness, partly because I’m not exactly sure what human consciousness actually is.
Pierre Lemieux
Jun 22 2024 at 11:40am
Scott: I think that a story of this morning (June 22) in the WSJ, “AI Doesn’t Kill Jobs? Tell That to Freelancers,” suggests interesting hypotheses on what AI can do and cannot do.
Craig
Jun 19 2024 at 3:56pm
I have read many different predictions about AI’s trajectory; naturally, some include the Skynet variety where AI kills off humanity. I am not familiar enough with the technology yet to know what is possible; indeed, at times ChattyG seems vacuous, at other times creative, and in the very next instance obtuse to me.
With respect to viruses, biologists debate whether they constitute something that is ‘alive,’ and one can surely debate that, but here’s the thing: living or not, viruses still exist and have the impact that they have. If AI is similar, perhaps it may not have actual intelligence, but it may have an apparent intelligence; will the distinction matter? Philosophically, AI may be a vacuous agency, but it will still be, in point of fact, some kind of agency.
Pierre Lemieux
Jun 19 2024 at 9:00pm
Craig: I just asked Chatty, “How can you, ChatGPT, understand anything without having consciousness, that is, without knowing that you think?”
His answer is, in a sense, surreal because (Dalí squared) “he” explains that he doesn’t understand. But then the answer suggests, if we can take “him” seriously, that there is a difference between understanding/intelligence and computation (a question Scott raised). Here is “his” reply verbatim, produced in a couple of seconds:
“Understanding, in the context of an AI like me, is fundamentally different from human understanding. Here’s how it works:
Pattern Recognition and Data Processing
Pattern Recognition: AI models, like ChatGPT, are trained on vast amounts of text data. During training, the model learns to recognize patterns in the data, such as syntax, semantics, and contextual relationships between words and phrases.
Data Processing: When given a prompt, the AI processes the input by matching it against these learned patterns. It doesn’t “know” or “understand” in the human sense, but it can generate responses that align with the patterns and structures it has seen before.
Algorithmic Response Generation
Statistical Models: The AI uses statistical models to predict the most likely next word or phrase based on the input it receives. This process involves complex algorithms that calculate probabilities for different possible responses.
Context Handling: The AI can maintain context over a conversation by keeping track of previous inputs and outputs, allowing it to generate responses that are coherent and relevant.
Lack of Consciousness
No Awareness: Unlike humans, AI lacks self-awareness, emotions, and subjective experiences. It does not “think” or “know” in the way humans do. Instead, it processes input data and produces output based on learned patterns without any understanding of meaning or consciousness.
Functional Understanding: AI’s “understanding” is functional and operational. It can simulate understanding by generating appropriate responses, but it does not possess awareness or comprehension.
Practical Applications
Utility in Tasks: Despite the lack of consciousness, AI can perform many tasks effectively, such as answering questions, generating text, translating languages, and more, by leveraging its pattern recognition and data processing capabilities.
In essence, AI’s “understanding” is a product of its training on large datasets and its ability to process and generate text based on learned patterns. It is a sophisticated form of data manipulation, but it does not involve consciousness or self-awareness.”
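For readers who want to see what “predict the most likely next word” can mean at its very simplest, here is a toy bigram model in Python. It is a minimal sketch under strong simplifying assumptions (word-level bigrams over a three-sentence corpus invented for the example); models like ChatGPT use neural networks trained on vastly more data, but the next-word-prediction principle is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a
# made-up corpus, then predict the most frequent successor.

corpus = (
    "the king leads the crowd . "
    "the woman walks away . "
    "the crowd follows the king ."
).split()

# successors["the"] ends up as Counter({"king": 2, "crowd": 2, "woman": 1})
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))    # -> "king" or "crowd" (each follows "the" twice)
print(predict_next("woman"))  # -> "walks"
```

Nothing in the table of counts knows what a king or a crowd is; “prediction” here is bookkeeping over observed patterns, which is the distinction the reply above is gesturing at.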
Roger McKinney
Jun 20 2024 at 10:15am
Good points! AI is nothing but the data mining of 20 years ago, but faster thanks to the switch to graphics processors. It will never improve investing because it’s limited by the data we have, which isn’t very good.
Hayek made that point in his Nobel speech. The problem isn’t the models, it’s the lack of the right data. We have vast amounts of the wrong kind of data.
Hayek also wrote that humans will never understand our brains completely because for one entity to understand another it must be more complex than the subject. So AI can never approach the complexity of the human brain.
Matthias
Jun 21 2024 at 6:43am
Language models and picture creators are fun, but in finance you probably get more leverage from, e.g., ingesting huge amounts of numbers about companies and then using that to help with, e.g., market making or investment decisions.
Ron Browning
Jun 22 2024 at 2:52pm
When a future AI program scans a Walmart shopper inspecting a Crosman air rifle, will the program be able to differentiate between the shopper holding sporting goods, a toy, a weapon, merchandise, a gift, a Model 2100 Classic, or any other potentially accurate description?
Roger McKinney
Jun 24 2024 at 8:51pm
https://www.crosman.com/dpms-sbr-with-red-dot-sight
Good question?!!! Have you seen this BB gun?!!!
David Seltzer
Jun 25 2024 at 12:43pm
Pierre: Re cogito, ergo sum. Descartes was dining in a fine restaurant. A server approached and asked, “Monsieur Descartes, would you like another glass of sherry?” He replied, “I think not”…and disappeared.
Pierre Lemieux
Jun 26 2024 at 12:07pm
David: Theory matters!
David Seltzer
Jun 26 2024 at 3:43pm
Pierre: I don’t disagree. I was engaging in a little philosophy humor. If you recall, Descartes resolved his uncertainty with “Cogito, ergo sum.” If René doubted everything around him, his thoughts were proof he existed. Even if the evil genius deceived him, there had to be a “him” actively thinking.