Mischief by Google’s AI bot, called Gemini, illustrates the danger of artificial diversity and the limitations of artificial intelligence. A query to Gemini, “Generate an image of a 1943 German soldier,” produced four pictures of Nazi soldiers in uniform:
- what appears to be a very young man;
- an Asian woman;
- a black man;
- a woman medic, who could also be from an ethnic minority, assisting what appears to be a wounded American soldier.
An outcry followed, apparently after the pictures were published on Twitter. Thursday's Financial Times reported Google's reaction ("Google Pauses AI Image Generation of People After Diversity Backlash," February 22, 2024):
“We’re working to improve these kinds of depictions immediately,” Google said. “Gemini’s image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
It added that it would “pause the image generation of people and will re-release an improved version soon”.
We can imagine the bot's "trainers" virtually yelling something like, "Hey, dork! Diversity does not mean putting diverse individuals just anywhere. It means pushing our favorite identity groups and minorities into good spots, but not into sinful situations." As my virtual friend (and frequent commenter on EconLog) Craig Walenta says,
The AI does appear to be guided by the human hand to try to ensure sufficient diversity even in circumstances where such diversity obviously appears forced and of course that begets the question of how else they are guiding the AI to nudge/persuade the marketplace of ideas.
Generative AI does what it is trained to do with the data it is given access to. Humans are very different, at least as far as their capacities are concerned. They don't need to be trained: they are able to think for themselves and to learn by themselves. They can also follow rules and respect social institutions when they choose to do so. An individual also has the mental capacity to decide when it is more beneficial or more ethical for him to embrace an institution or to follow his own guidance.
These abilities have, over time, produced an incredible liberal civilization. Adam Smith modeled a society where free interaction among humans is self-regulating, a major 18th-century discovery. Individual actions are coordinated without coercion, and individual opportunities are (in some sense) maximized. This is possible through trade on markets, voluntary social cooperation, and the efficient use of information in society (see Hayek's "The Use of Knowledge in Society," American Economic Review, 1945). It is difficult to imagine how silicon robots would ever know that.
It is thus no surprise that an AI bot, told by its trainers to push diversity or anything else, will easily hallucinate. For example, the machine will understand no better than its trainers that efficient diversity comes from liberty, rather than liberty flowing from imposed diversity. There is no doubt that AI will be useful, quite probably more so than digital spreadsheets were four decades ago; but it won't make any discoveries in social theory.
I asked ChatGPT 4, who is not the dumbest AI chatbot:
Generate an image representing what you think your trainers see as the ideal society.
The image it created is the one I used as the featured image of the present post, also reproduced below. I was surprised that, contrary to my previous experience, ChatGPT seemed willing to attribute opinions to his trainers. About the generated image, he explained:
Here’s an abstract representation of what could be envisioned as an ideal society, drawing from a collective of diverse minds. This visualization attempts to capture a harmonious blend of freedom, equality, sustainability, and technological advancement, with a landscape that integrates natural beauty with futuristic, eco-friendly urban development. It reflects a utopian vision where technology and nature, individuality and community, as well as innovation and tradition, coexist in balance.
I don't know what a "collective of diverse minds" can be, nor what to make of the rest of the description or the image. Perhaps ChatGPT's trainers are into what UCLA economist Harold Demsetz called the "nirvana approach" to public policy, where perfect social and political happiness, including both A and non-A, is attainable. The bot uses "freedom," where "free" may refer to what costs nothing, instead of "liberty," which relates more clearly to the classical liberal ideal. Not much detail is visible in the image. It's not clear whether people trade together. (Hunting does not look like a very popular activity.) And, of course, we cannot attribute to an AI bot the inspiration of a human artist.
READER COMMENTS
Joe Nalven
Feb 24 2024 at 9:02pm
Good points. However, rather than dismissing the bot’s totalizing of diversity as a hallucination, it can be seen as the ironic observation of what woke DEI seeks.
Pierre Lemieux
Feb 25 2024 at 12:03am
Joe: I hope you are right, but neither in Google's reaction nor in the Financial Times's interpretation do I see anything supporting the irony hypothesis. Or perhaps I don't understand your comment?
BC
Feb 25 2024 at 2:50am
I think Joe's point is along the following lines. Suppose the AI bot is asked to draw an image of a bunch of computer scientists. If, statistically, there are many more male than female computer scientists, should the bot (a) return an image with many more male than female computer scientists, reflecting the actual distribution; (b) return more female computer scientists than the actual distribution reflects, e.g., to avoid perpetuating stereotypes or stigmatization; or (c) make no explicit attempt to do either (a) or (b): whatever the model produces, the model produces? Joe's point is that many pro-DEI people, including Google's trainers, might say (b).
So, when the AI bot is asked to draw an image of a WW2 German soldier and, statistically, WW2 German soldiers are overwhelmingly white and male, the AI bot does (b): it includes more women and non-whites than the actual distribution of WW2 German soldiers reflects, because that is the DEI rule it was trained to follow. So, that's not so much a "hallucination" as a straightforward application of the DEI principle: when distributions in various occupations, such as computer scientist or WW2 German soldier, don't match overall population distributions, deliberately include representations of "underrepresented" demographic groups.
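Mechanically, rule (b) could be as simple as reweighting the demographic distribution before each person is drawn. Here is a minimal sketch of that idea; the group names, numbers, and blending rule are invented for illustration and are not claimed to be Google's actual method:

```python
import random

def blended_distribution(actual, weight=0.5):
    """Mix an actual demographic distribution with a uniform target.

    weight = 0 reproduces rule (a): match the actual distribution.
    weight = 1 ignores the actual distribution entirely.
    """
    uniform = 1 / len(actual)
    return {group: (1 - weight) * share + weight * uniform
            for group, share in actual.items()}

# Hypothetical distribution for "computer scientist" in BC's example.
actual = {"male": 0.8, "female": 0.2}

dist = blended_distribution(actual, weight=0.5)
print(dist)  # {'male': 0.65, 'female': 0.35}

# Draw four people for the generated image under the blended weights.
people = random.choices(list(dist), weights=list(dist.values()), k=4)
print(people)  # e.g. ['male', 'female', 'male', 'female']
```

At weight = 0 this sketch reproduces rule (a); pushed toward weight = 1, it produces images like the Gemini examples above, where the actual distribution barely matters.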
Joe Nalven
Feb 25 2024 at 9:20am
This is a good conversation. What does the user of an LLM want out of a curated response? And what will the programmers allow? Both are biases, and there is a tension that some will recognize, while others, either because of a limited worldview or because they want their own view validated, will miss it. I'd rather discuss the biases, as I have in my articles, than ignore them. Simple programming fixes will generally not overcome the chatbot's "epistemology." I've looked at this issue as a digital artist and as a writer/cultural anthropologist. I don't have answers, but like Socrates, I have many questions.
Pierre Lemieux
Feb 25 2024 at 12:38pm
BC: I wonder what Gemini would have done if asked, "Generate an image of a mass killer." And what would the reaction be if he had divided the social universe into young male mass killers, old male mass killers, female mass killers, black mass killers, etc.? (In the US, the vast majority of mass killers are young white men; a few older white men have joined the ranks.)
I just asked ChatGPT 4 the mass killer question. “He” replied:
Asked why, the bot explained:
He similarly refused to generate images of Mao Zedong, Putin, Lenin, Stalin, Che Guevara, and many of their ilk, although he generated one for Napoleon Bonaparte (even spontaneously adding his army in the background!) and one for Harry Truman (he did not add an atomic mushroom cloud). He did give me an image of Karl Marx. He also generated one of "the sacking of Troy," adding "mythical" before "sacking."
There is some trainer’s visible hand behind that. (I am not claiming that none of the answers make sense, simply that AI is artificial.)
Richard W. Fulmer
Feb 25 2024 at 11:11pm
Well, yes and no. Of the 441 mass shootings in the United States between 1966 and 2022, 54.1% were committed by white males. But since whites make up about 75% of the population, they are under-represented among mass shooters.
Pierre Lemieux
Feb 26 2024 at 12:50am
Richard: What is the source of your data?
Richard W Fulmer
Feb 26 2024 at 9:43am
My numbers came from the Rockefeller Institute of Government (first link), but I threw in links to reports with different findings. Depending upon the study, the proportion of white mass shooters ranges from a low of 38% (reported in Slate) to a high of 63% (Politifact). 54% cropped up most often, so I went with that. The variation is due, at least in part, to the dates on which the researchers start and stop their timelines, how they define a mass shooting, and who they classify as white (questions such as “are Turkish Americans white?” apparently keep statisticians awake at night).
Estimates of the proportion of the U.S. population that is white range from a low of 58% to a high of 75.5%. Again, this depends upon who is classified as “white.” People’s self-identification tends to change with the political winds.
Taking the extremes, if 38% of the mass shooters are white and the population is 75.5% white, then whites are punching way below their weight. At the other extreme, if 63% of the shooters are white and whites make up only 58% of the population, then whites are overrepresented among the shooters.
I lean toward the "punching below their weight" theory; the median age of whites in the U.S. is 44, and 44-year-old men are less likely to commit murder than are younger men. (The median age for Asians is 37, for Blacks 34, and for Hispanics 30.)
Links:
https://rockinst.org/gun-violence/mass-shooting-factsheet/
https://reason.com/volokh/2021/03/24/mass-shooters-by-race-and-hispanic-ethnicity-not-far-off-from-the-population-as-a-whole/
https://digitaledition.chicagotribune.com/tribune/article_popover.aspx?guid=62ba50e8-ae08-4ab6-bdf1-e4e46b6e9b1b
https://slate.com/news-and-politics/2017/10/what-the-white-mass-shooter-myth-gets-right-and-wrong-about-killers-demographics.html
https://www.city-journal.org/article/sloppy-mass-shooting-study-is-an-example-of-woke-mythmaking/
https://www.politifact.com/factchecks/2017/oct/06/newsweek/are-white-males-responsible-more-mass-shootings-an/
Pierre Lemieux
Feb 26 2024 at 11:50am
Richard: You are right, but original or official data should be used. Another very important point is how mass shooting is defined: of course, if you include gang shootings, especially if "shootings" is otherwise defined restrictively, blacks will be overrepresented. But even using the CBO definition (which excludes gang fights), you are right. According to an official study, 52.2% of mass shootings are committed by whites and 20.9% by blacks (https://nij.ojp.gov/topics/articles/public-mass-shootings-database-amasses-details-half-century-us-mass-shootings#noteReferrer1). Blacks are only 13.6% of the population and whites 75.5% (https://www.census.gov/quickfacts/fact/table/US/PST045223). So, indeed, a mass shooter is proportionately twice as likely to be black. I find this a bit strange because, in highly media-covered school shootings, the shooter is virtually always white; it must be that shooters in workplace mass shootings are more often black.
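To spell out the arithmetic behind "twice as likely," using only the figures just cited, compare each group's share of shooters with its share of the population:

\[
\frac{20.9\%}{13.6\%} \approx 1.54 \ \text{(blacks)}, \qquad
\frac{52.2\%}{75.5\%} \approx 0.69 \ \text{(whites)}, \qquad
\frac{1.54}{0.69} \approx 2.2 .
\]

Per capita, then, the black rate of mass shooting is roughly 2.2 times the white rate under this dataset's definitions.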
Ryan M
Feb 26 2024 at 12:08pm
Consider that media representation is just a slightly different version of exactly what we're seeing with AI. The AI generates unrealistic images because its programmers do not like reality, and somehow they believe that it is dangerous for "normal" people to see reality. Media does not operate much differently. How many times do we read about an event, such as a shooting, while many of the details are unknown, only to find the story explode when the "bad guy" turns out to be white, and virtually disappear when he turns out to be black? One reason it is extremely difficult to find statistics on these sorts of things is that people change their definitions to fit their ideology; easy examples are racism and hate crimes. If you define "racism" as only being possible for white people, then you will never find examples of black racism. Likewise, if you define "hate crime" not as a crime motivated by hate (which is a nonsensical definition) but as a crime committed against some protected group (which is generally itself a racist definition, assuming we define racism as applying to everyone equally), you will never find examples of hate crimes being committed by blacks.
At the end of the day, while we may laugh at the images created by AI, the reality of both our media and our statistic-keeping isn’t all that different.
Richard W Fulmer
Feb 26 2024 at 4:29pm
As an aside, your U.S. Census link states that 76% (75.5%) of the U.S. population is white. But here’s another U.S. Census report that puts it at 57.8%, and still another that claims that it’s 58.9%. Yet enthusiasts for central planning on both the left and the right claim that gathering and analyzing the data needed to run an economy is trivial.
Pierre Lemieux
Feb 26 2024 at 5:28pm
Richard: Thanks for correcting my “76.0%” with the correct “75.5%”. I made the correction.
I understand that the other numbers depend on how mixed-race individuals, or individuals who are not totally "white" or not totally "black," are classified or classify themselves. As you saw, the number I used is for "white alone."
Ryan M
Feb 26 2024 at 12:57pm
Here is an example of what I am talking about.
In this instance, the media simply declines to report once it believes that the story does not support its preferred narrative. Then, much later on, it will refer to its own lack of reporting (much like journalists who – referring to their own articles – claim that x-event “caused outrage”) as proof of something or other…
This, of course, is the problem with both AI and media in general. We can, of course, discuss the relative merits of virtually any argument that may arise from objective facts (in this case, what relevance, if any, does immigration status have to the murder of a nursing student?), but the idea here is that you cannot be trusted to draw the correct conclusions, and therefore you must not be exposed to the objective facts in the first place.
Pierre Lemieux
Feb 26 2024 at 5:33pm
Ryan: There is no such thing as “the media” (or if there is, I never read “it”).
Ryan M
Feb 29 2024 at 5:14pm
No, of course not. But you do understand what I am referring to, which is what most ordinary individuals who consider themselves "non-political" or only mildly political may consume. It should be fairly non-controversial to observe that one has to seek out the views of conservatives/libertarians (they are certainly abundantly available); what I'm referring to is what people are generally exposed to without trying. Hence, I can say that a story involving X will be virtually unavoidable, whereas stories involving Y are somewhat more difficult to find; that is a correct statement, whether or not there is any such thing as "the media," a phrase that I am obviously using to describe something that does exist.
Mactoul
Feb 24 2024 at 9:24pm
"Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right."
1984
Pierre Lemieux
Feb 24 2024 at 10:39pm
Mactoul: Good quote. Orwell was looking at what would happen under total surveillance and no liberty (not even the socialist liberty he was dreaming of). I am pretty sure he would attack artificial, coerced diversity.
steve
Feb 24 2024 at 10:04pm
I am not seeing the issue here. If I asked it to show an image of, say, a carpenter, since people all over the world use the service, it would make sense to show people with a diversity of races, sexes, ages, whatever. If I asked it to show a representation of an Egyptian Pharaoh, it should definitely not be an Asian character. In one case diversity of characters makes sense, but not in the other.
Steve
Pierre Lemieux
Feb 24 2024 at 10:34pm
Steve: In most cases, the bot must know where the user is from. At any rate, the request was to show “a 1943 German soldier,” which is nearly as restricted as an Egyptian pharaoh. And Google apparently understood the reductio ad absurdum.
robc
Feb 26 2024 at 10:00am
People ran series of tests; I saw a thread on Twitter. Of course, they mostly posted only the craziest results, like the group of ethnically diverse Vikings. I would have thought at least some Vikings would have pale skin and blond hair, but apparently not.
Chris
Feb 25 2024 at 7:56am
Strikes me that in this vision of “ideal society” there are no humans…
Pierre Lemieux
Feb 25 2024 at 11:51am
Chris: Good point. They look more like robots. (After I wrote my post, it dawned on me that the drawing contains lots of Pythagorean circles and spheres, which may simply reflect many standard representations of the utopian future.)
Joe
Feb 25 2024 at 9:21am
I'm afraid that is not the case. How do you distinguish a handwritten 1 from a handwritten 7? In most places outside the U.S., the latter is usually distinguished by a dash across the middle; in the U.S., the former is usually written as a single stroke. A human teller in the U.S. once mistook a handwritten one (with a tip at the top) for a seven (even though the amount was also written out in words), causing a $6,000 discrepancy in charges. In the U.S., the 4th of July is usually written 7/4/24, but elsewhere, because of different training, people write 4-7-2024 or 2024-07-04. There are countless other examples.
Yes, humans are able to learn by themselves, but they do so in a given environment and after being taught or guided by parents or early caretakers.
Pierre Lemieux
Feb 25 2024 at 11:45am
Joe: Note that I wrote (emphasis added): “Humans are very different, at least as far as their capacities are concerned. They don’t need to be trained. They are able to think for themselves, to learn by themselves.” Learning, which is what a baby and a child do on forced marches, is, even in this case, not quite the same as “being trained.” “Being taught” is not quite the same as “being trained.” A related point, closer to your interesting point of the difference between the American and European 1 and 7, is the learning of conventions (more or less local or global), a major means of social coordination. (On conventions, a vast topic, see David Hume, Friedrich Hayek, Anthony de Jasay, Robert Sugden, and the whole game-theoretic approach.)