Relative to tenured professors of social science who were hypothetically
given my task, and considering average accuracy relative to simple
standard academic theories, what do you estimate to be my percentile
rank in 1) overall accuracy, and 2) the number (or amount) of forecasts?
My answer: If you want to forecast the Age of Em, simple standard academic theories are not enough to even get started. The entire analysis hinges on which people get emulated, and there is absolutely no simple standard academic theory of that. If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong. That’s low accuracy even by academic standards; I’d put it at the 20th percentile of overall accuracy.
However, by Robin’s second criteria – number of forecasts – he’s off the charts. I’d put him at the 99th percentile, or even the 99.9th percentile. But I wish he’d spent vastly more time getting the foundations right, or at least carefully defending his foundational assumptions against the alternatives. That’s what Robin did in his stupendous piece on futarchy in the Journal of Political Philosophy, and that’s what I wish he’d done in The Age of Em.
READER COMMENTS
Seb Nickel
Jun 16 2016 at 5:05am
The singular of criteria is criterion. This seems to be changing in de facto usage, and I’ve never understood why. Not that it’s important, it just puzzles me.
Mark Bahner
Jun 16 2016 at 12:20pm
This, to me, is the biggest flaw of Robin’s book. (Again, I have to admit that I haven’t read the book…but I understand the premise.)
I don’t see how any economic case can be made for emulating human brains. If you emulate a brain, the first thing the brain will do is demand to be treated like all the other human brains (i.e., hydrocarbon human brains). For example, how can you deny an em the minimum wage (which in Hillary’s second term will probably be $150/hr)?
So why not simply make a super version of IBM Watson? Something that doesn’t care how it’s treated, because it doesn’t have any feelings?
Even though I read Brave New World more than 40 years ago, one particular passage sticks with me to this day. There’s a clone elevator operator who only has enough intelligence to operate an elevator. He (the clone) is ecstatic when he gets to the roof, because it’s the most interesting floor…with a view of the sky and the outside world. That struck me as extraordinarily cruel, to create clones like that. Why not instead put buttons on the elevator that people push themselves?
A similar situation occurs with autonomous vehicles (and a thousand other things…autonomous lawn mowers, autonomous painters, autonomous road builders and repairers, etc.). Does anybody really think we’re going to have an em of the best taxi driver in the world to run our autonomous cars? It simply doesn’t make economic sense to have ems in virtually any situation. They’re not the lowest-cost solutions to the tasks at hand. An economist ought to be able to see that.
P.S. Of course, if someone tells me that Robin has three chapters that explain why my thinking is wrong, then I might have to get the book before it comes down to $1.99. 🙂
MikeP
Jun 16 2016 at 1:33pm
If you could emulate humans — which I don’t think you can — you would not copy the most robot-like humans and treat them as slaves. You would copy various of the best human intelligences and run the copies through fast parallel evolutions to figure out what made them intelligent and what made them human-like — and you would remove the emulation of the latter.
Once there was no trace of humanity, self-awareness, consciousness, or the like, humans could feel free to let these things multiply at their service, like any elevator, ATM, or self-driving car.
The result would not be cruel in any way because there is nothing in the em that looks like, acts like, or is like a human. And without self-derived goals, these ems would not be a threat to humanity.
The alternative, where machines can derive their own goals, is near certain human extinction: the ems would rapidly find that humans cannot keep up with them — nay, cannot even communicate with them — and would stomp all over what the humans need to survive exactly as humans stomp all over what animals need to survive.
Will
Jun 22 2016 at 10:00pm
Is this addressed at all in the book?
Emulations are never going to be as efficient as the thing being emulated, so why wouldn’t we ballpark the cheapest cost of running an emulation at normal bio-speed at something like 5-10x the power consumption of a human brain? The brain uses something like 20% of our calories, so a biological human is probably comparably power-efficient to an em (though fueled differently), but with the added advantage of a built-in body to manipulate the world around them.
Increasing the em’s processing speed is going to scale its power cost at least linearly with the speed increase, and probably much worse, because you’ll have to provide a simulated environment for the em to live in or else it’ll go crazy waiting for sensory input. If agriculture gets efficient enough that supplying a bio human with energy is comparable to supplying an em, I would never expect total em domination, but rather clusters of ems fired up for specialized tasks that need to happen quickly, with plenty of niches left for normal-speed bio humans (rough numbers sketched below).
Unless we imagine a future of unlimited energy.
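For a rough sense of Will’s numbers, here is a minimal back-of-envelope sketch in Python. The 2,000 kcal/day diet, the 20% brain share, the 5-10x emulation overhead, and the at-least-linear cost of speedup are all assumptions taken from or extrapolated beyond the comment, not figures from the book.

```python
# Back-of-envelope comparison of em vs. biological-brain power draw.
# All numbers are illustrative assumptions, not figures from The Age of Em.

KCAL_PER_DAY = 2000            # assumed typical human diet
BRAIN_SHARE = 0.20             # brain uses roughly 20% of calories, per the comment
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 24 * 3600

# Power budget of a biological brain on these assumptions (~19 W).
brain_watts = KCAL_PER_DAY * BRAIN_SHARE * JOULES_PER_KCAL / SECONDS_PER_DAY
print(f"Biological brain: ~{brain_watts:.0f} W")

# Commenter's guess: an em at normal bio-speed costs 5-10x the brain's power.
for overhead in (5, 10):
    print(f"Em at {overhead}x overhead: ~{brain_watts * overhead:.0f} W")

# Running the em k times faster scales power at least linearly with k,
# before counting the cost of simulating an environment for it to live in.
for speedup in (1, 10, 100):
    print(f"Em at 10x overhead, {speedup}x speed: >= {brain_watts * 10 * speedup:.0f} W")
```

On these assumptions a normal-speed em lands around 100-200 W, in the range of a biological human’s total metabolism, which is why the comparison turns on energy prices and on how much speedup (and simulated environment) the em actually needs.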