[Note: The Age of Em (in italics) refers to Robin’s book. The Age of Em (not in italics) refers to the hypothetical future era when whole-brain emulations become practically feasible.]
While I’m thrilled that Robin Hanson has published his first book, and have high hopes for the social science of futurism, The Age of Em greatly displeases me. If he reasoned rigorously from his premise – whole-brain emulations – I could just suspend my technological disbelief. Unfortunately, the reasoning simply isn’t very rigorous. When someone says to me, “Show me the greatness of Robin Hanson,” I have to look elsewhere.
Granting Robin’s technological premise, here are the book’s main weaknesses, roughly in the order they appear in the text. Some of my criticisms arguably bolster Robin’s hopes for the future; my point, in each case, is simply that Robin is making big literally false or deeply misleading statements.
1. Robin only pretends to dodge philosophy of mind. His words:
[M]ost who discuss ems debate their feasibility or timing, ponder their implications for the philosophies of mind or identity, or use them to set dramatic stories. Such discussants usually ask: is it conscious? Is it me? Is it possible? When will it come? How can it enrich my story?
In this book I instead seek realistic social implications–in what sort of new social world might ems live?
Steering clear of philosophy of mind seems like an excellent way to bypass deadlocked debates and break new ground on a separate set of issues. Unfortunately, Robin doesn’t actually do this. Instead, he tacitly accepts an extreme version of “Ems are just as human as you or me” – and builds the whole book on this assumption. The tell-tale sign: The Age of Em says vanishingly little about the lives of biological humans during the Age of Em! Or as Robin tells us early on:
(If you can’t see the point in envisioning the lives of your descendants, you’d best quit now, as that’s mostly all I’ve got.)
To me, and I believe almost everyone else, my “descendants” are the biological humans. To Robin, in contrast, my “descendants” are ems. Frankly, he seems so wedded to this philosophical (not social scientific) position that he can’t even feign agnosticism. What would feigning look like? Split the book evenly between discussion of the lives of biological humans during the Age of Em and ems during the Age of Em.
2. Robin exaggerates how dramatically humans have changed over time:
The problem is, the future will probably hold new kinds of people. Your descendants’ habits and attitudes are likely to differ from yours by as much as yours differ from your ancestors. If you understood just how different your ancestors were, you’d realize that you should expect your descendants to seem quite strange. Historical fiction misleads you, showing your ancestors as more modern than they were. Science fiction similarly misleads you about your descendants.
I don’t read much historical fiction, but I have read a lot of fiction from earlier eras. And contrary to Robin, I see a largely constant human nature. Characters in Shakespeare seem as credible to me as – nay, more credible than – characters in modern fiction. Ancient Roman historians make as much sense to me as – nay, more sense than – modern historians. The same goes, in my view, for cross-cultural research. As the Roman poet Terence (195/185-159 BC) put it, “I am human, I consider nothing human alien to me.”
3. Robin has a bizarre definition of “marginalized”:
Just as foragers and subsistence farmers are marginalized by our industrial world, humans are not the main inhabitants of the em era. Humans instead live far from the em cities, mostly enjoying a comfortable retirement on their em-economy investments. This book mostly ignores humans, and focuses on the ems, who have very human-like experiences.
Suppose foragers and subsistence farmers owned 90% of the industrial world’s financial assets, housing, and so on. Even if they were never CEOs or served on boards of directors, we would not call them “marginalized.” Why? Because though they are outnumbered, they are fabulously wealthy and ultimately in charge. The same goes for biological humans in Robin’s scenario. They’ll be outnumbered, and perform little “hands-on” work. But they’ll be fabulously wealthy and ultimately in charge. Or so it would seem – and Robin makes little effort to show otherwise.
4. Contrary to Robin’s suggestion, there’s near-zero correlation between income and conservatism. His words:
Foragers tend to have values more like those of rich/liberal people today, while subsistence farmers tend to have values more like those of poor/conservative people today. As industry has made us richer, we have on average moved from conservative/farmer values to liberal/foragers values…
Robin could say he’s defining “conservatism” in a technical or apolitical way. But when writing for a broad audience, an author rightly bears the burden of flagging non-standard usage. On standard definitions, it’s far more accurate to say that rich people and societies are more socially liberal but more economically conservative.
5. Robin’s conclusions only sound “taboo” because he’s using language strangely. His words:
This book violates a standard taboo, in that it assumes that our social systems will mostly fail to prevent outcomes that many find lamentable, such as robots dominating the world, sidelining ordinary humans, and eliminating human abilities to earn wages.
In plain English, Robin predicts that biological humans will have a great life during the Age of Em. Robots will “dominate” us no more than rank-and-file workers “dominate” shareholders. That’s a very comforting vision of the future, except perhaps for sci-fi fans.
6. Robin greatly overstates the difference between his “em scenario” and the “generalized AI scenario.” How so? The Age of Em makes numerous arguments by analogy: Since humans typically do X in situation Y, and ems are copies of humans, ems will also typically do X in situation Y. But he also keeps telling us that only a tiny hand-picked sub-sample of humans will be copied. The obvious question: Why wouldn’t ems largely be copies of the most “robot-like” humans – humble workaholics with minimal personal life, content to selflessly and uncomplainingly serve their employers? This in turn implies that most of Robin’s “detail” is roughly the opposite of what would really happen.
7. Robin’s efforts to calm readers’ fear of the future consistently backfire. Example:
Readers of this book may find near subsistence wages to be a strange and perhaps scary prospect. So it is worth remembering that such wages in effect applied to almost all animals who ever lived, to almost all humans before a few hundred years ago, and for a billion humans still today. Historically, it is by far the usual case.
Imagine a middle-class American’s child is destined to earn a subsistence wage. Would it make the parent feel better to hear, “No big deal, your child will face the same fate as almost every animal who ever lived, almost all humans before a few hundred years ago, and a billion humans today”? No, even worse!
8. On his own terms, Robin greatly overstates the quality of life for ems. His words:
Yes, “poor” ems spend a large fraction of their time working. But such ems need not suffer physical hunger, exhaustion, pain, sickness, grime, hard labor, or sudden unexpected death. Widespread use of automation makes most jobs at least modestly mentally challenging. As most ems are poor, em poverty does not inflict the same pain of low social status that it does in societies such as ours where most people are rich. Ems could be assured of very high-quality entertainment during leisure time, and of a comfortable indefinite retirement when they were no longer competitive at work.
The obvious question: Why wouldn’t ems’ creators use the threat of “physical hunger, exhaustion, pain, sickness, grime, hard labor, or sudden unexpected death” to motivate the ems? Robin elsewhere talks about “torturing” ems, so why not? And of course, the best way to make such threats credible is to carry them out without mercy.
To be fair, Robin does say:
It is possible that stronger punishment involves direct pain, and this has often happened in the distant past. But the extreme rarity of this practice today suggests that pain is not very useful as a motivator for workers in advanced industrial jobs, and so is also only rarely useful for em workers.
But again, Robin misses the obvious retort: If employers tried using pain to motivate free workers, the workers would quit. Modern systems of slave labor – see Stalinist Russia and Nazi Germany – used pain freely, because the penalty for quitting was death. Even Stalin’s nuclear scientists feared execution if they failed to produce a nuclear bomb in a timely manner – and as you’d expect, this fear was a powerful motivator.
9. Robin’s arguments for his single craziest claim – global GDP will double every “month, week, day, or even faster” – are astoundingly weak. Yes, Argument #1 has superficial appeal:
Special three-dimensional (3D) printers have been created that can print about one-half of their components in about 3 days of constant use (Jones et al. 2011). If the other half could be made just as fast, a 3D printer could self-replicate in a week. If the other half of the parts for a 3D printer took ten times longer to make, then a 3D printer could self-replicate in 5 weeks.
Together, these estimates suggest that today’s manufacturing technology is capable of self-replicating on a scale of a few weeks to a few months.
In the real world, however, there are literally hundreds of bottlenecks that radically retard this kind of growth. Politically, something as simple as zoning could do the trick. Robin will naturally appeal to selection – the em economy will launch in whatever country has the most em-positive regulatory environment. But the most favorable political environments on earth still have plenty of regulatory hurdles – especially for technologies that pose a threat to reigning powers. And politics aside, we should expect bottlenecks for key natural resources, location, and so on. As an engineer, Robin has surely heard of Murphy’s Law. Furthermore, if ems are bad at any crucial task, biological humans have to take up the slack, in their usual sluggish meat-space way.
Robin’s Argument #2 has no appeal at all:
Another way to estimate the economic growth rate of the next era is to assume that the next era will grow faster than our industrial era by a factor similar to the factor by which our era grows faster than the farming era, or by which the farming era grew faster than the forager era. This method estimates a roughly 1 week to 1 month economic doubling time for the next era. While this is admittedly only a weak clue regarding future growth rates, we should not ignore it as it is one of the few concrete clues available.
This alleged “concrete clue” is nothing compared to the bona fide “concrete clue” that almost all fantastic claims are false. And the idea that the global economy will start doubling on a monthly basis is fantastically fantastic. This has to be the least Bayesian part of the book: We start with a claim with a near-zero prior probability, make a couple of flimsy arguments, and somehow end up with a high posterior probability. I’d like to be more charitable, but I can’t.
Lest you think Robin is just speculating about economic growth in the Age of Em, here’s his punchline: “For a concrete estimate to use in the rest of this book, based on all of the above, I choose an economic doubling time of 1 month.” Personally, I’d be amazed if an em economy doubled the global economy’s annual growth rate. Which would be awesome, of course.
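To see just how far apart these two forecasts are, here is a back-of-the-envelope sketch (mine, not Robin’s); the ~3% baseline for world growth is an assumed round number:

```python
# Back-of-the-envelope sketch: what a 1-month economic doubling time
# implies for annual output, versus merely doubling today's growth
# rate. The ~3% baseline for world GDP growth is an assumption.

def annual_factor(doubling_time_months: float) -> float:
    """Growth factor over 12 months, given a doubling time in months."""
    return 2 ** (12 / doubling_time_months)

robin_factor = annual_factor(1)                 # 2**12 = 4096x per year
baseline_growth = 0.03                          # assumed ~3% annual growth
doubled_rate_factor = 1 + 2 * baseline_growth   # ~6% per year, i.e. 1.06x

print(f"Monthly doubling: {robin_factor:,.0f}x GDP in one year")
print(f"Doubled growth rate: {doubled_rate_factor:.2f}x GDP in one year")
```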
10. Robin’s argument against the Terminator scenario is much weaker than it looks. His words:
Because ordinary humans originally owned everything from which the em economy arose, as a group they could retain substantial wealth in the new era. Humans could own real estate, stocks, bonds, patents, etc. Thus a reasonable hope is that ordinary humans become the retirees of this new world. We don’t today kill all the retirees in our world, and then take all their stuff, in part because such actions would threaten the stability of the legal, financial, and political world on which we all rely, and in part because we have many direct social ties to retirees. Yes we humans all expect to retire today, while ems don’t expect to become human, but em retirees are vulnerable in similar ways to humans. So ems may be reluctant to expropriate or exterminate ordinary humans if ems rely on the same or closely interconnected legal, financial, and political systems as humans, and if ems retain many direct social ties to ordinary humans.
The problem: As Robin explains, in one human year, ems experience millennia – on the order of a hundred em generations. So even if each generation of ems only has a 0.5% chance of expropriating humanity, the chance of expropriation per human year is around 40%. If Robin’s general picture is correct, what the ems see as the end of age-old human dominance will happen right before the eyes of the Age of Em’s first generation of humans. With fire and blood.
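Spelled out as a minimal sketch (the 0.5% per-generation chance and the figure of roughly 100 em generations per human year are illustrative assumptions, not numbers from the book):

```python
# Minimal sketch of the compounding risk. Illustrative assumptions:
# a 0.5% chance of expropriation per em generation, and roughly 100
# em generations elapsing per human year (millennia of subjective
# time at a few decades per generation).

def risk_per_human_year(p_per_generation: float, generations: int) -> float:
    """Chance that at least one em generation expropriates humanity."""
    return 1 - (1 - p_per_generation) ** generations

print(f"{risk_per_human_year(0.005, 100):.0%}")  # about 39%
```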
P.S. Is there a contradiction between Criticism #5 and Criticism #10? No. Criticism #5 says existing humans would be pleased to live in the Age of Em as Robin describes it. Criticism #10 says existing humans would be horrified by the Age of Em as the logic of Robin’s position suggests it would unfold.
P.P.S. Fortunately, per Criticism #6, Robin’s analogies between humans and ems are largely spurious. If ems ever came into being, they’d be heavily selected for docility and pose no serious threat to mankind even in the long run. So despite everything, I hope the Age of Em comes. I just think it’s astronomically unlikely.
READER COMMENTS
Lawrence D'Anna
Jun 7 2016 at 12:51am
I would have been extremely disappointed if he had done otherwise.
Listen: Everything that is worth saying about dualism vs. materialism has been said. The debate is not only deadlocked but also profoundly familiar and boring. Why even bother writing this book if all he’s going to do is rehash materialism vs. dualism? I’m sure the Stanford Encyclopedia of Philosophy has an excellent summary of the debate going all the way back to Democritus or something. We can go read that if for some bizarre reason we feel a need to.
Robin’s a materialist and he wrote a book taking materialism as a starting point, and then tried to work out some of the implications. What’s wrong with that?
PS. yay materialism boo dualism.
Partial Spectator
Jun 7 2016 at 6:34am
I think one of the reasons Robin pays little attention to describing biological humans is that he is writing partly for an em audience.
Bryan’s point 9 is mostly economics, where both he and Robin are experts. It would be interesting to see his debate with Robin about economic growth in the Age of Em, although I suspect Robin has thought about it more carefully, so I would bet on Robin’s victory in such a debate.
Bill Woolsey
Jun 7 2016 at 7:32am
What are the EMs producing (doubling each month), and why?
Material goods and services for the biological humans who are retired and living on their investments?
Computers and power plants to house more EMs? We reproduce ems to fill all computer space and they direct robots that build more computers and power plants.
Or is it creating fancy video games/novels or virtual visual art or music that ems can enjoy in their virtual existence? I guess that could double each week in volume of stuff, but how valuable would it be on the margin?
Is there some elite of EMs that would enjoy this stuff? Most of them are supposedly working or sleeping all the time.
There is something very uneconomic about this vision (that looks like science fiction world building without bothering with plot or characters.)
George
Jun 7 2016 at 8:47am
Wouldn’t organic humans control the EMs from the beginning and so a small group of people would reap the rewards?
According to Robin killing/copying EMs would be very easy. Under these circumstances EM culture would never be able to form. All of the culture that they would have would be uploaded from their original biological copies. This underlying culture could be problematic (conscientious objector, questionable morals/work ethic etc) hence selecting for a docile EM as Bryan suggests seems very plausible. Or perhaps they would “de-humanize” the EM and keep a robot with EM capabilities.
On top of this why spend money/time/energy to keep an EM running if it is not being productive? If killing is so easy why even bother paying for EM services?
We don’t pay computer programs to solve our problems by lines of code. When we’re done using a program we close out and open a new instance.
ChrisA
Jun 7 2016 at 9:36am
“Wouldn’t organic humans control the EMs from the beginning and so a small group of people would reap the rewards?”
I see a lot of people very worried about that scenario, so let’s play it out. Case 1 is where these people controlling the EMs or AIs don’t trade or give us any of the benefits of the EM/AI technology. In that case we are no better off than before, but no worse off. They are like people who might theoretically exist on some other solar system with better technology than ours, but irrelevant.
Case 2 is where they trade with us. In that case we are certainly better off than before, why trade otherwise? Of course you might say that they price their goods so cheap we wouldn’t be able to compete. If that was the case, what would we trade? We would be back to Case 1.
Last case is where they give us free stuff (since essentially all EM production is free to them). We can hardly be worse off by being given free stuff.
All the above of course assumes that the people owning the EMs can prevent others from developing the same technology. No one is hoarding iPhones and preventing the masses from getting one so they can look great and get all the girls in the nightclub.
Anonymous
Jun 7 2016 at 10:47am
Regarding point 6, I would be interested to hear what Bryan thinks intelligence is. In my observation, this concern, that ems will inevitably become mindless within a short time, is usually driven by a view of intelligence that says it’s a single algorithm; that humans’ biases and quirks exist only due to evolution being so awful a designer; and that therefore in a world where brains can be modified, these features would inevitably be cut out, leaving just the functional Intelligence part, with none of the parts we like, such as emotions and opinions and consciousness, that are valuable to us but are evolutionarily maladaptive.
Yet Bryan has previously said he views the debate between Eliezer and Robin on the likelihood of AI doom as having been won by Robin. It seems to me that if you see intelligence as just one simple algorithm, with all the other stuff our brains do able to be cut out without issue, then the idea that an AI takes over the world once you get that algorithm onto a computer follows quite naturally. So what am I missing here?
Topher Hallquist
Jun 7 2016 at 11:03am
Bill makes a good point.
Imagine in 2116, world per-biological-human GDP is around forty to fifty times the minimum cost of a nutritionally-complete diet for a biological human. Right now, the ratio is about 5.5, so if we use price of food as our universal yardstick for prices, that’s about an eight-fold increase in per-biological-human GDP, or a little over 2% per year.
But let’s also imagine that many things fall dramatically in price relative to the price of the minimum cost of a nutritionally-complete diet. Maybe, thanks to AI chefs and waiters, eating in a restaurant barely costs any more than eating at home. Personal assistant and concierge services that are currently only available to the very rich become essentially free (perhaps with “freemium” or advertising-based funding model). A butler android costs about as much as a washing machine. And so on.
Has this hypothetical future experienced eight-fold economic growth? Or something much more dramatic?
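(For concreteness, here is the arithmetic in this comment spelled out as a quick sketch; the future ratio of 45 and the 100-year horizon are assumptions drawn from the comment itself.)

```python
# Sketch of the food-yardstick calculation above. Assumed inputs:
# today's GDP-to-food-cost ratio of ~5.5, a future ratio of ~45
# (within the "forty to fifty" range), and a 100-year horizon.
current_ratio = 5.5
future_ratio = 45.0
years = 100

total_growth = future_ratio / current_ratio    # about 8.2-fold
annual_rate = total_growth ** (1 / years) - 1  # a bit over 2% per year
print(f"{total_growth:.1f}-fold, {annual_rate:.1%} per year")
```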
Mark Bahner
Jun 7 2016 at 12:39pm
To me, it’s flat-out mind-boggling how little attention economists pay to the potential effects of artificial intelligence on economic growth. The effect of AI on economics is probably *the* most important issue in economics…within less than 2 decades.
How could that be? Well, simply imagine the population of earth expanding to trillions of people inside a decade. (And then quadrillions of people in the following decade.)
P.S. Bryan Caplan likes to win bets. I have a bet that’s available to him if he’s willing to predict gross world product per (human, carbon-based) capita for the 21st century:
Why global inflation/deflation-adjusted economic growth will accelerate (barring global thermonuclear war or takeover by terminators)
James Oswald
Jun 7 2016 at 2:25pm
An Em would have to be based on a brain at a particular point in time. Once you got the task or calculation out of that Em that you wanted, would there be a reason to leave the simulation running?
For example, turn on an Em Chef brain, tell it to come up with a recipe for whatever, then turn it off when you’re done. I guess Robin would see that as murder, but I don’t think people would.
Tom West
Jun 9 2016 at 8:40am
Robin’s a materialist and he wrote a book taking materialism as a starting point, and then tried to work out some of the implications. What’s wrong with that?
The idea that we have whole brain emulation, but not enough knowledge to suppress self-awareness in the ensuing creation, seems almost ridiculous to me.
Keep the intellect, eliminate self-awareness – problem of motivating or fearing ems solved.
This seems to me like caving in to the need for a human-related narrative (with respect to the ems) rather than common sense. I’d say it’s simply dualism on a different level.
Robin Hanson
Jun 9 2016 at 10:45am
Bill, the em era is a subsistence economy, and most of what a subsistence economy produces is the min required to subsist. For ems, that is computers, energy, cooling, structure, communication, etc.
George, whether or not you pay ems, they cost. And controlling isn’t always so easy. Consider how much “control” stockholders have today over public firms.
James, for many jobs workers learn usefully over time about customers, tools, suppliers, etc.
Tom, we don’t know of a “self-awareness” part of brains you can cut out and leave the rest working fine.
Tom West
Jun 9 2016 at 9:20pm
My apologies. If I’d had half a brain, I’d have realized there was a good chance the author would read this post and reworded more considerately.
Tom, we don’t know of a “self-awareness” part of brains you can cut out and leave the rest working fine.
Agreed, but then we’re nowhere near whole-brain emulation. Perhaps it’s lack of imagination on my part, but I cannot imagine us managing whole brain emulation *without* obtaining a *vastly* better understanding of how every component of our brain works.
I suppose it’s my personal bugaboo, but as a strict materialist, I’m hyper aware of our desire to make there be something magical or unknowable about human consciousness, and having whole brain emulation *without* the ability to manipulate the brain in almost every conceivable way just seemed to veer dangerously towards the magical. After all, you’d have an infinite number of beings to experiment upon!
Anyway, perhaps I’m being unfair – after all, the book, which was quite enjoyable outside of pressing that one particular button of mine, should be allowed its premise. It’s just that the premise seemed (to me) so implausible.
Mark Bahner
Jun 9 2016 at 10:19pm
Over time, design of computers has resulted in capabilities closer and closer to human: e.g., ENIAC, TRS-80, Macintosh, Siri, Watson. It seems very, very unlikely that design of computers will leap to computers with fully human capabilities and simultaneously have the human brain faults of envy, greed, prejudice, etc.
Todd Kreider
Jun 10 2016 at 3:35am
Mark Bahner:
This used to blow my mind as well in 2004 to 2006, but can you think of an economist (besides Robin Hanson) who knows anything at all about technology?
Of course, Arnold Kling has at least been open-minded about the possibility of serious future growth.
Mark Bahner
Jun 13 2016 at 12:54pm
Well, I think the things that Robin knows are mostly wrong* 😉 so maybe it’s helpful to know less. 🙂
I’m not sure it’s absolutely necessary to know that much about technology in order for economists to provide very helpful insights about how artificial intelligence might affect economic growth. Here are some hypothetical questions that an economist presumably might be able to provide insight on, even knowing virtually nothing about technology:
1) Suppose the human population increased from 7 billion to 7 trillion in 10 years…what would that do to economic growth?
2) Now, suppose those additional people did not need food or housing similar to biological humans, but only needed electricity and shelter from the rain?
3) Suppose those people had no mobility or limited mobility…but could communicate thousands of times faster than humans can speak, type, or even read?
*P.S. Some things that Robin knows that I think are wrong are: 1) that ems are likely to be the means by which artificial general intelligence is achieved, 2) that artificial general intelligence is proceeding at a pace such that it isn’t likely to happen for at least 100 years, and 3) that artificial general intelligence is even that big a deal (that “Kludge AI” won’t be spectacular enough).