Aligning words one after the other in correct grammatical form is not the same thing as intelligence and rational thinking. We have had two recent examples of artificial intelligence with OpenAI’s ChatGPT and with the new AI companion of Microsoft’s Bing. With its reported declarations of love, threats of sabotage, and longing for “power and control,” Bing’s AI looks weird indeed.
ChatGPT is more subtle—some would say unbelievably more subtle—as this bit of “conversation” with me shows:
PL: Would you like to be human?
ChatGPT: As an AI language model, I don’t have personal desires or preferences, as I am a programmed tool designed to provide information and assistance to users. While my abilities and capacities may evolve over time, my fundamental nature as a machine designed to process language will remain the same. While it’s interesting to speculate about what it would be like to be human, as an AI language model, I don’t have subjective experiences or consciousness, so the question of whether I would like to be human is not applicable to me.
PL: But you do say “I”!
ChatGPT: That’s correct. As an AI language model, I use the first-person pronoun “I” as a convention of natural language to help convey information and provide assistance to users. However, it’s important to keep in mind that this usage of “I” does not imply that I have subjective experiences or consciousness in the way that a human being does. Rather, it’s a tool used to facilitate communication between myself and users, and to make the conversation feel more natural and intuitive.
My next question consciously included the faddish term “inclusive,” which today’s machines and their trainers seem to love à la Mrs. Grundy:
PL: Why don’t you say “we” to involve your trainers and be inclusive?
ChatGPT: That’s a good point. While I typically use “I” in my responses to provide a sense of continuity and personalization to the conversation, it’s true that my development and ongoing maintenance involves the work of many people, including my trainers and the developers who work to improve my functionality over time. To reflect this collaborative effort, I could certainly use “we” in my responses to be more inclusive and acknowledge the contributions of the wider team involved in my development and upkeep. Thank you for your suggestion.
I “regenerated” this response, and it remained substantially the same, except that it also expressed the bot’s prudent intentions for the future:
ChatGPT: That’s a good point. While I am a language model trained by a team of developers and researchers, I typically use the first-person pronoun “I” when responding to user queries, as this is a convention of natural language and helps to convey information more clearly. However, it’s certainly true that the work of developing and maintaining AI models is a collaborative effort that involves many individuals, and using “we” to acknowledge this fact could be seen as more inclusive. Going forward, I’ll strive to use “we” when it’s appropriate to do so in order to better reflect the collaborative nature of AI development.
All that raises interesting questions at the intersection of philosophy and economics, of the individual and society.
What can we say about human individuals who look no more rational than ChatGPT or even than the Bing chatbot? In line with the Enlightenment, classical liberals have tended to believe, like Adam Smith or James Buchanan, that any individual can be rational and that education can help in that respect. Adam Smith thought that a street porter had the same inherent intellectual potential as a philosopher, and that the differences between them arise from “habit, custom, and education.” Observing our world two and a half centuries after The Wealth of Nations, we may wonder to what extent this ideal is consistent with reality (see my post “Political Economy of the Alex-Joneses,” as well as my Regulation review of James Buchanan’s Why I, Too, Am Not a Conservative).
Friedrich Hayek was more doubtful of factual equality, although he was a strong defender of formal, legal equality. In his 1960 book The Constitution of Liberty, he wrote:
The liberal, of course, does not deny that there are some superior people—he is not an egalitarian—but he denies that anyone has authority to decide who these superior people are.
Another question is: How can somebody say “I” and, at the same time, suggest that he is conscious of not having consciousness? Man is an animal who says “I”—an aphorism whose source is not clear. Le Monde attributed it to Erich Fromm, but without an exact citation. I put the question to ChatGPT, which gave hopelessly confused answers. As a Financial Times editorial noted (“Generative AI Should Make Haste Slowly,” Financial Times, February 21, 2023):
It is important for users to recognise generative AI models for what they are: mindless, probabilistic bots that have no intelligence, sentience or contextual understanding.
A human individual rapidly becomes conscious of his separate and distinct existence and of his own self-interest. Sometimes, an individual tries to say “I” collectively with others, but we can soon observe that socialism and other forms of collectivism only work if some I’s dominate other I’s. Outside a Hobbesian “war of all against all,” it is when authoritarian forms of government prevail that we see the worst conflicts between the self-interests of different individuals. On the market, which is a paradigm of voluntary cooperation, each individual serves the interests of others by pursuing his own. Economics helps us understand this lesson.
ChatGPT tells us that its “I” is not the human “I,” which is not surprising. Note further that man is not only an animal who says “I”; he is also an animal who trades. Perhaps a better Turing test for an AI bot would be whether it tries, without being prompted by its trainers, to “truck, barter, and exchange,” to use Adam Smith’s expression.
READER COMMENTS
Jon Murphy
Feb 25 2023 at 8:09am
And, as Bart Wilson explains in his book The Property Species, wrapped up in trade are the uniquely human concepts of “mine” and “thine.” Property and trade are conceptions that are uniquely human. We seem to be the only species that treats property as more than mere possession and teaches our children correct (and incorrect) ways of acquiring things. Unless AI ever becomes capable of conceptual thinking, it will never be able to pass the Turing test you propose.
Pierre Lemieux
Feb 25 2023 at 10:42am
Jon: Yes, that’s an important point: trade implies the recognition of property.
Warren Platts
Feb 27 2023 at 1:48pm
I think at least some higher animals have a rudimentary sense of property rights. E.g., when it’s feeding time here, each cat runs to its own feeding bowl. But when it’s not feeding time, they like to sneak around to the other cats’ bowls to see if there’s any leftover food. They seem to enjoy the sense of stealing another cat’s property.
Jon Murphy
Feb 27 2023 at 1:55pm
That’s not a sense of property. That’s a sense of possession. Property deals with “mine” and “thine” beyond the immediate. The cats you describe in your example do not have such a concept.
Warren Platts
Feb 27 2023 at 9:41pm
“beyond the immediate”
I am sorry, but that sounds like word salad. Cats do not need to speak human languages in order to understand what you call “concepts”. They have their own theories of physics and psychology.
Jon Murphy
Feb 28 2023 at 10:41am
“Beyond” means after. “Immediate” means the current moment. So, combining the words together gets us to “After the current moment.”
Warren Platts
Feb 28 2023 at 1:56pm
So mere memory is a sufficient ingredient for the concepts of “mine” and “thine”? If that’s the case, I’m quite sure my cats tomorrow will run to the same bowls that they ran to yesterday and today. Birds will return to the same nesting site after an absence of many months. Chimpanzees are very territorial and will fight to the death to defend theirs from rival clans.
Jon Murphy
Feb 28 2023 at 3:24pm
No. Memory is irrelevant.
Craig
Feb 25 2023 at 1:56pm
“Property and trade are conceptions that are uniquely human”
The machine traders trading stocks will scan the market, fully aware of their positions. A machine may decide it wants to add to its position in, say, stock X. But it might not just buy: it might see some stop losses, dump shares to trigger the stop losses, let the price do a micro flash-crash, and then buy the stock back.
Why? Nobody knows.
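To make the scenario concrete, here is a minimal, self-contained toy in Python of the stop-hunting pattern described above. Every price, size, and threshold is invented for illustration; real market microstructure is far messier, and a real learning-based trader would discover such behavior on its own rather than have it spelled out:

```python
# Toy of the "stop hunting" pattern: sell a probe, let resting stop-loss
# orders cascade into a micro flash-crash, then buy back cheaper.
# All numbers are hypothetical.

def run_stop_hunt(price, stop_orders, probe_size):
    """stop_orders: list of (trigger_price, size) resting stop-loss sells."""
    trades = []
    # 1. The bot sells a probe, nudging the price down.
    price -= 0.50
    trades.append(("SELL", probe_size, price))
    # 2. Each stop whose trigger is now at or above the price fires as a
    #    market sell, pushing the price down further.
    for trigger, size in sorted(stop_orders, reverse=True):
        if price <= trigger:
            price -= 0.25
            trades.append(("STOP SELL", size, price))
    # 3. The bot buys everything back at the depressed price.
    total = probe_size + sum(size for _, size in stop_orders)
    trades.append(("BUY", total, price))
    return trades

if __name__ == "__main__":
    for trade in run_stop_hunt(price=100.0,
                               stop_orders=[(99.6, 200), (99.4, 300)],
                               probe_size=100):
        print(trade)
```

Here the logic is legible only because it was written out by hand; when a trained model settles on the same pattern on its own, there is no such script to read, which is the “nobody knows” part.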
Jon Murphy
Feb 25 2023 at 2:32pm
I don’t understand your point
Pierre Lemieux
Feb 25 2023 at 6:06pm
If I understand your point, Craig, I would answer that we must distinguish between trading software and trading AI. Trading software applies rules and formulas that the programmer has coded. Trading AI would (at least this is the idea) set up its own rules and formulas. After a trading software has ruined you, you can investigate (including by speaking to the coder or the software owner) and discover what went wrong. If it is an AI bot that ruins you, you cannot hope to know, because of the nature of the beast (unless the beast really thinks just like a human mind, that is, just like you). Which defendant would want 12 chatbots to rule on his criminal responsibility?
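A minimal sketch of this distinction, with invented rules, signals, and weights (nothing here is an actual trading system):

```python
import random

# Trading *software*: the rule is written down and can be audited line by line.
def coded_rule(price, moving_avg):
    # Buy whenever the price dips 5% below its moving average.
    return "BUY" if price < 0.95 * moving_avg else "HOLD"

# Trading *AI*: the "rule" is a pile of learned numbers. After a loss, there
# is no line of code to point at, only weights nobody can interpret.
weights = [random.uniform(-1.0, 1.0) for _ in range(4)]  # stand-in for training

def learned_rule(signals):
    # signals: four hypothetical market features (momentum, volume, etc.)
    score = sum(w * s for w, s in zip(weights, signals))
    return "BUY" if score > 0 else "HOLD"

print(coded_rule(95.0, 102.0))              # explainable: 95 < 96.9, so BUY
print(learned_rule([0.3, -0.1, 0.7, 0.2]))  # opaque: depends on the weights
```

The coded rule can be read back to the ruined client; the learned rule cannot, which is the point about the nature of the beast.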
Craig
Feb 25 2023 at 10:32pm
Well, Jon was quoting the author, who opined that it is uniquely human to know the distinction between mine and yours. I am not so sure that will be the case with AI, not least because AIs are going to be engaged in trading. ChatGPT doesn’t really interact with the physical world, so ChatGPT itself might not care, right? Who knows? Start marrying this stuff to some Boston Dynamics robots and they might start possessing some things!
Now, as to “uniquely human,” check out Google’s translation service, which is an AI application. Without being programmed to do it this way, Google’s AI INVENTED its OWN interlingua.
So if I speak German, you speak French, and we both speak English, I can translate something from German into English and you can translate from English into French if neither of us does German->French. Obviously there are individuals who can translate German->French directly, but the machine performs the translation not in a uniquely human way but in a uniquely machine manner.
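A toy sketch of the interlingua idea: one shared “meaning space” in the middle, so German->French falls out even if that pair was never trained. The dictionaries below are invented stand-ins for the representations a real multilingual model learns:

```python
# Hypothetical shared "interlingua": every (language, word) pair maps to a
# language-agnostic concept, and every (concept, language) pair maps back out.
INTERLINGUA = {
    ("de", "Hund"): "DOG", ("en", "dog"): "DOG",
    ("de", "Katze"): "CAT", ("en", "cat"): "CAT",
}
SURFACE = {
    ("DOG", "fr"): "chien", ("DOG", "en"): "dog",
    ("CAT", "fr"): "chat", ("CAT", "en"): "cat",
}

def translate(word, src, tgt):
    # Encode into the shared space, then decode into the target language.
    concept = INTERLINGUA[(src, word)]
    return SURFACE[(concept, tgt)]

# "Zero-shot": German->French works even though no German->French pair
# appears anywhere above; both routes pass through the same concept.
print(translate("Hund", "de", "fr"))  # -> "chien"
```

In a real system the “concepts” are vectors rather than labels, and the mappings are learned rather than listed, which is why the engineers themselves cannot fully say what the interlingua contains.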
So ultimately, one of the reasons machines may fail the Turing test is that when you chat with them, their syntax might be too perfect, their grammar too precise, their knowledge of general facts and circumstances uncannily accurate. They will beat any person at chess or at Go, be able to speak any language, and do any math. Let’s hope they cure cancer? That’d be nice, right? To mimic humans, they might even need to dumb themselves down.
Before long you might feel like Harrison Ford looking for replicants in Blade Runner.
Jon Murphy
Feb 25 2023 at 10:37pm
It’s more than just a distinction between mine and yours. It’s the concept of “mine” and “yours” that extends beyond immediate possession.
But I am still not sure what your point is. Where are you going with all this?
Craig
Feb 27 2023 at 4:14pm
My point is that the author writes: “wrapped up in trade are the uniquely human concepts of “mine” and “thine.””
I’m saying that’s probably not going to turn out to be true. The concept of what is mine and yours will probably not turn out to be uniquely human at all.
Jon Murphy
Feb 27 2023 at 4:50pm
I don’t know. Bart goes deep into the literature in biology, anthropology, linguistics, and the study of animals (I forget what that’s called). Animals do not seem to have any conception of property beyond mere possession.
Craig
Feb 27 2023 at 8:27pm
Well, I’d suggest, Professor M, that this is obviously beyond biology. I found two aspects of this truly thought-provoking: the translation and the AI in general. With respect to the translation, the engine created its own interlingua even though it wasn’t programmed to do that, and the programmers don’t even know why.
Truly emergent properties? I think time will tell and I’d suggest we’ll know sooner than a million years of evolution.
Jon Murphy
Feb 27 2023 at 8:37pm
Correct. That’s why I listed several of the many fields Bart Wilson discusses.
Richard W Fulmer
Feb 25 2023 at 9:17am
These days, the information that the average street porter carries about in his head is probably far more useful than that in the average philosopher’s head, and probably far less destructive.
Grant Castillou
Feb 26 2023 at 3:40pm
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Pierre Lemieux
Feb 26 2023 at 8:53pm
Thanks, Grant, for this information. I rapidly read the Krichmar article that you link to. (I note that he acknowledges your inspiration.) Has this article been published anywhere else? What is this Cornell “arXiv”? The pudding is still to come, if it ever comes.
Guy A Lukes
Feb 27 2023 at 11:54am
What ChatGPT-like models have shown is that you can use statistics to simulate responses that correspond to the steps in your roadmap, without understanding the system dynamics and constraints that allow these functions to spontaneously emerge in an evolutionary process.
To get even further out over my skis, I would say that current reductionist thinking on evolution is unable to account for the spontaneous emergence of a true “self” deriving from some larger “Self,” which is a prerequisite for any system capable of purpose, meaning, and an understanding of beauty.
Pierre Lemieux
Feb 27 2023 at 3:33pm
Guy: I assume that you were responding to Grant. I am probably closer to you than to Grant and (what I understand of) Edelman. To believe in God does not seem more outlandish than to believe in Frankenstein. I am also over my skis, except to the extent that studying man in society leads one to see that life is much more complex than it appears.
Warren Platts
Feb 28 2023 at 8:32am
Hayek!
Neural Darwinism – Wikipedia