Tyler Cowen tries to answer. I’ll add my comments. He writes,
we’ll be much better at measuring which research Ph.D.s are contributing value and which ones are not, or at least we’ll think we are. Since academic achievements follow a Power Law, that will mean a huge ouch for many would-be academicians. The new professor will need to be skilled in assembling collages of information, raising money, and communicating to broader public audiences. Either that or his research should be very obviously of the top order. The distribution of income across professors will become radically less equal, as indeed the trend has been for well over a decade now.
I guess it depends on who pays for the research. I think we already know that the value of research in some fields is close to zero, but the taxpayers and patrons funding the research do not seem to mind.
I think that if value added in research and/or teaching becomes more measurable, the entire credentialist model could unravel. That would be huge. It would wipe out 90 percent of the present education industry, if not more.
Tyler then looks into what might be safe investments for $10 million in a Black Swan world.
If you have $10 million, the safest thing to do is to diversify across currencies, buy government securities of various kinds, hold $1.5 million in gold, and otherwise not invest at all. Oh yes, invest in some cheap hobbies. In a real crunch remote land is worthless — transport costs…
Keep in mind that one of the Black Swans would be contagious sovereign debt defaults in the U.S., Europe, and other mature developed countries. Maybe gold takes care of you in that scenario, but I would not be so swift to rule out land in potential safe havens. Of course, if the U.S. is very weak, it is not clear what is a safe haven: Singapore can be threatened by China, for example.
The point about cheap hobbies may seem odd, but not from a Masonomics perspective. Another way to put it is that the cost of living will be really low, except for status competition. Think about what your friends spend money on, and how much of it is related to maintaining status.
My cohort of friends bought houses in expensive neighborhoods and/or sent their children to private schools. They spent a lot on college tuitions. They pay a lot for bar mitzvahs and weddings. All of that can be chalked up to competing for status through their children.
There are interesting status competitions that are much less expensive. If you cultivate interest in those, you can live on very little money.
One question is health care. If you could get the right combination of insurance policies (long-term care? super-catastrophic? insurance against bad outcomes from genetic testing?), that would be a valuable asset.
Tyler writes,
I believe that machines will never outcompete humans across the board
I agree that Singularians are far too optimistic about artificial intelligence. It is a variation of the “fatal conceit” problem. Most of human intelligence is tacit knowledge, consisting of elaborate metaphors that are originally acquired from sensory experience. Artificial intelligence is an attempt to arrive at the same point through top-down design. I’m being glib here. Sorry.
I think that the progress of computers and robots will be economically significant but not paradigm-shifting. The big paradigm shift will come from bioengineering. That will challenge our view of what it is to be human, what it means to have an ecological system, etc.
READER COMMENTS
fundamentalist
Oct 27 2009 at 1:12pm
“…the safest thing to do is to diversify across currencies…”
As a mainstream economist, you can’t do better than that, because you have no idea what will happen to the economy in the future. Everything is random, so diversify as much as possible.
However, if you understand the Austrian business cycle, you can do much better than that.
wm13
Oct 27 2009 at 1:54pm
Regarding the credentialist model, won’t law firms and banks still need to identify the best and brightest? (Maybe other employers too; I confine myself to the fields I know.) It is only PhDs that would lose value, I think, not JDs and MBAs.
Note that there are right now a handful of intellectual disciplines which do not rely on academic credentials. (Scholarly genealogy comes to mind.) Possibly disciplines that are academically based today will come to look like that.
rvman
Oct 27 2009 at 2:41pm
However, if you understand the Austrian business cycle, you can do much better than that.
See immediate prior post for refutation.
Grant
Oct 27 2009 at 3:01pm
Wouldn’t the future of bio-engineering depend on how legal, or how heavily regulated, it is? I could easily see the median voter wanting to ban human genetic engineering.
Emulating the human brain in software seems like a much more realistic goal than AI, and it would be paradigm-shifting.
steve
Oct 27 2009 at 4:47pm
I think the definition of “the singularity” in the sense meant by its advocates is that nothing is really predictable from here through to the other side of the event.
So, disregarding the premise as conclusion, I will make my own bets. The price of many types of information is approaching zero, even if its value is not. What happens if the price of intelligence approaches zero? I would place my bets on physical things, the rarer and more dispersed the better.
I guess this is a bet that intelligence is an easier nut to crack than, say, light speed or the cost-effective transmutation of matter. I would not consider little colored pieces of paper with pictures on them rare.
mobile
Oct 27 2009 at 5:35pm
we already know that the value of research in some fields is close to zero, but the taxpayers and patrons funding the research do not seem to mind
Sorry, but if the patrons don’t mind funding it, then by definition the value is not zero.
Marcus
Oct 27 2009 at 7:24pm
“Most of human intelligence is tacit knowledge, consisting of elaborate metaphors that are originally acquired from sensory experience. Artificial intelligence is an attempt to arrive at the same point through top-down design.”
I don’t disagree.
But I think there are a couple of things you’re not considering:
1) Computers can be programmed to learn ‘bottom-up’, i.e., from experience, as sketched below.
2) The resulting information can be passed on perfectly to the next generation of computers.
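To make the ‘bottom-up’ idea concrete, here is a minimal sketch: a perceptron that picks up the AND function purely from labeled examples, with no rule for AND ever written into the program. Everything in it (the examples, the learning rate, the number of passes) is an illustrative choice, not anything from the comment itself.

```python
# A toy "bottom-up" learner: weights start at zero and are nudged by
# experience alone until the program behaves like AND.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

for _ in range(20):  # repeated exposure to the same experiences
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        # Adjust each weight in whatever direction reduces the error.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    print((x1, x2), "->", predict(x1, x2), "expected", target)
```

The learned state is just a handful of numbers, which is what makes point 2 work: the weights can be copied bit-for-bit to any number of other machines.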
steve
Oct 27 2009 at 8:18pm
I agree, mobile. Value and price are two different things. What I am suggesting is: what if a computer could essentially do as good a job as a trained engineer? The intelligence embedded in the computer has no less value but may well have a much lower price. Oddly, or maybe not so oddly, it may not have much effect on the price of an artist.
Robin Hanson
Oct 28 2009 at 8:36am
Arnold, perhaps you haven’t considered the whole brain emulation scenario, where the machine would directly inherit all of a human’s tacit knowledge.
Robert Scarth
Oct 28 2009 at 4:26pm
“Most of human intelligence is tacit knowledge, consisting of elaborate metaphors that are originally acquired from sensory experience. Artificial intelligence is an attempt to arrive at the same point through top-down design. I’m being glib here. Sorry.”
It’s true that most of human intelligence is tacit knowledge, but that’s not a relevant point wrt intelligent computers. The point is that this knowledge, whether tacit or not, is encoded in the physical structure and state of the human brain (and probably other parts of the body as well). If it can be encoded in a human body then it can, in principle, be encoded in a computer. The point to make wrt the fatal conceit is that while the knowledge can be encoded in a computer, it can’t be encoded in a computer which is “simpler” than a human body; more generally, the economy (society) cannot be encoded in a computer “simpler” than the economy (society), and therefore any top-down attempt to control the economy (society) will inevitably miss necessary knowledge that a market (liberal) system would not.
CJ Smith
Oct 29 2009 at 8:58am
Marcus:
Computers do not learn, per se. At best, they remember sequences of events and can apply complex pre-determined decision-tree analyses (if-then statements) to issues. This process can appear to mimic learning, but it lacks a fundamental component of true learning: the ability to extrapolate connections between otherwise unrelated areas of knowledge to new endeavours.
Thus, you can create a program that “learns” how to play chess, but the program merely compares the situation to forecast possible solutions under a pre-determined, albeit complex, decision paradigm created by a human programmer. The program appears to “learn” in two ways: first, by “memorizing” your prior plays in similar situations and evaluating both your most likely response and optimal alternative strategies. The second way the computer appears to learn is by cheating: pushing its analytical horizon more and more moves into the future. The classic rule of thumb in chess is that an average player plays 3 moves in advance, an above-average player 5 moves, and a master or grandmaster 7–10 moves in advance. A program with sufficient processing power can evaluate well beyond that horizon.
But no matter how good the chess program, it won’t be able to play checkers, backgammon, or other strategy board games without additional programming specific to that game, because the program can’t extrapolate and apply the underlying learning concept (strategic thinking and foresight) outside of its programmed, limited area.
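A minimal sketch of the look-ahead “cheating” described above, using fixed-depth minimax. The game here is a hypothetical toy (players alternately take 1–3 stones; whoever takes the last stone wins), chosen only so the example is self-contained; a chess program applies the same search idea with a vastly larger move generator and evaluation function.

```python
# Fixed-depth minimax: no learning, just searching a pre-defined game
# tree out to a fixed horizon.

def minimax(stones, depth, maximizing):
    """Score a position by searching up to `depth` moves ahead."""
    if stones == 0:
        # The previous mover took the last stone and won.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # horizon reached: the program knows nothing beyond it
    moves = [take for take in (1, 2, 3) if take <= stones]
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth):
    """Pick the take with the best minimax score at this search depth."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: minimax(stones - take, depth - 1, False))

# Deeper search plays better, with nothing resembling new insight added.
for depth in (2, 12):
    print("depth", depth, "-> take", best_move(10, depth))
```

The only thing separating the weak and strong versions of this program is the depth parameter; the “strategic thinking” never transfers to any other game.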
Michael Vassar
Oct 29 2009 at 12:18pm
I don’t think that ANY AI researchers I have met are unaware that “most of human intelligence is tacit knowledge, consisting of elaborate metaphors that are originally acquired from sensory experience,” and I can think of none for whom “artificial intelligence is an attempt to arrive at the same point through top-down design” is a valid claim. In any event, the term “singularity” refers to greater-than-human intelligence, however and whenever it happens, and regardless of whether it happens through biotech or AI. AI can be more abrupt, but people underestimate how abrupt biotech-driven transitions could be.
Marcus
Nov 2 2009 at 7:28am
CJ Smith,
Sorry for the late reply.
Chess programs are purpose-written: the programmers (i.e., experts) build into the application all the knowledge they have about the game of chess. That’s ‘top down’ and it’s not what I’m talking about.
What if the purpose-written program were a ‘brain’ application, like a neural network, written to learn about its environment? That would be bottom up. Give it some set of senses to perceive the world with and set it on its way.
Of course, this stuff is hard to do. No doubt about it. But we know for a fact it can be done because we have a working example: the human brain.