In graduate school, I recall a professor suggesting that the rational expectations revolution would eventually lead to much better models of the macroeconomy. I was skeptical, and in my view, that didn’t happen.
This is not because there is anything wrong with the rational expectations approach to macro, which I strongly support. Rather, I believe that the advances coming out of this theoretical innovation occurred very rapidly. For instance, by the time I had this discussion (around 1979), people like John Taylor and Stanley Fischer had already grafted rational expectations onto sticky wage and price models, which contributed to the New Keynesian revolution. Since that time, macro seems stuck in a rut (apart from some later innovations from the Princeton School related to the zero lower bound issue).
In my view, the most useful applications of a new conceptual approach tend to come quickly in highly competitive fields like economics, science and the arts.
In the past few years, I’ve had a number of interesting conversations with younger people who are involved in the field of artificial intelligence. These people know much more about AI than I do, so I would encourage readers to take the following with more than a grain of salt. During these discussions, I sometimes expressed skepticism about the future pace of improvement in large language models such as ChatGPT. My argument was that there were some pretty severe diminishing returns to exposing LLMs to additional data sets.
Think about a person who reads and understands 10 well-selected books on economics, perhaps a macro and micro principles text, as well as some intermediate and advanced textbooks. If they fully absorbed this material, they would actually know quite a bit of economics. Now have them read 100 more well-chosen textbooks. How much more economics would they actually know? Surely not 10 times as much. Indeed, I doubt they would even know twice as much economics. I suspect the same could be said for other fields like biochemistry or accounting.
This Bloomberg article caught my eye:
OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of powerful AI that outperforms humans. But the model, known internally as Orion, didn’t hit the company’s desired performance. Indeed, Orion fell short when trying to answer coding questions that it hadn’t been trained on. And OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products, three of the leading AI companies are now seeing diminishing returns from their hugely expensive efforts to build newer models.
Please don’t take this as meaning I’m an AI skeptic. I believe the recent advances in LLMs are extremely impressive, and that AI will eventually transform the economy in some profound ways. Rather, my point is that the advancement to some sort of super general intelligence may happen more slowly than some of its proponents expect.
Why might I be wrong? I’m told that artificial intelligence can be boosted by methods other than just exposing the models to ever larger data sets, and that the so-called “data wall” may be surmounted by other methods of boosting intelligence. But if Bloomberg is correct, LLM development is in a bit of a lull due to the force of diminishing returns from having more data.
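For readers who like to see the arithmetic, here is a minimal Python sketch of the diminishing-returns point, assuming a Kaplan-style power law for loss as a function of dataset size. The exponent and constants are purely illustrative assumptions on my part, not anyone’s measured results.

```python
# Minimal sketch of why "10x the data" buys far less than "10x the performance."
# Assumes a Kaplan-style power law, loss(D) ~ (D_c / D)**alpha, with an
# illustrative exponent; the specific numbers are assumptions, not measurements.

def loss(data_tokens, d_c=1e12, alpha=0.095):
    """Hypothetical test loss as a power law in dataset size (tokens)."""
    return (d_c / data_tokens) ** alpha

base = 1e12  # an assumed baseline training set, in tokens
for multiple in (1, 10, 100):
    print(f"{multiple:>4}x data -> loss {loss(base * multiple):.3f}")

# With alpha around 0.1, multiplying the data tenfold cuts loss by only
# about 20%, and the next tenfold cuts it by roughly 20% again:
# steeply diminishing returns to ever larger data sets.
```

Under these assumed numbers, a hundredfold increase in data still leaves you with about two-thirds of the original loss, which is the same flavor of result as the 10-books-versus-100-books thought experiment above.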
Is this good news or bad news? It depends on how much weight you put on risks associated with the development of ASI (artificial super intelligence).
READER COMMENTS
Jim Glass
Nov 13 2024 at 9:31pm
Yes, I’ve seen a lot about this, declining marginal returns to training, depletion of material to train on, explosion of the energy costs of training — to the point of reviving nuclear power just for AI, including reopening Three Mile Island.
I’ve no idea if this is good news or bad news for the future of humanity. But I suspect it’s going to be pretty bad news for most of the many firms that have gotten billion-dollar valuations for themselves on zero net revenue by calling themselves “AI leaders”, just as bad news arrived for the bulk of the dot-coms during that bubble, and for the “electronics” firms of the 1920s, and 97% of the maybe 500 auto makers that were busy creating that industry circa 1910.
Matthias
Nov 14 2024 at 3:33am
There was no dot-com bubble. But there sure was a dot-com bust.
(There was no dot-com bubble in the sense that if you had bought, e.g., all of the stocks on the NASDAQ throughout the alleged bubble years, and just held onto them for the long run, say 20 years, you’d have a decent return on investment.
Which suggests that the ‘bubble’ valuations were actually fairly reasonable on average.)
tpeach
Nov 14 2024 at 12:51am
This could be the case. AI seemed super impressive less than two years ago. But now the wow factor has passed, like most technologies when we get used to them. I’m no longer amazed by it, and sometimes even unimpressed.
However, when I think of the possibilities of AI, I’m reminded of what David Bowie said about the internet in an interview in 1999:
https://bigthink.com/the-future/david-bowie-internet/
David S
Nov 14 2024 at 5:25am
I’ll be more impressed with AI when we (or AI) figure out a way to scale down its energy use while continuing to improve the quality of output. When Newton wrote Principia, his brain didn’t need a nuclear power plant to sustain his mathematical breakthroughs.
At a more mundane level, humans can learn how to operate cars and machinery with a few hours of instruction and then perform with relatively low rates of fatal error for decades. Parts of the AI community seem to be operating like WW1 generals: “we just need a few million more men to achieve a breakthrough!”