Dear Aventine Readers,
This week we spoke with Joel Mokyr, one of the three winners of this year's Nobel Prize in Economics. Mokyr is best known for demonstrating the way scientific insight and technical know-how combined to fuel unprecedented economic expansion during the Industrial Revolution, all sparked by the invention of the steam engine. Will artificial intelligence prompt a similar cycle of prosperity today? Mokyr weighs in on the promise of AI, how we are mismeasuring GDP and the three existential crises he believes pose the greatest threat to humanity. (AI is not on that list, by the way.)
Also in this issue:
Thanks so much for reading. We will be back in two weeks, after a Thanksgiving break.
Danielle Mattoon
Executive Director, Aventine
Joel Mokyr, a 2025 Nobel Laureate in Economics
What History Tells Us About the Promise of AI
This year's Nobel Prize in economic sciences recognized three scholars for describing how new technologies can drive sustained economic growth. Joel Mokyr, an economic historian at Northwestern University, won half of the prize for groundbreaking work showing that a revolving door between abstract scientific knowledge and practical technological know-how can create a virtuous cycle of innovation that leads to economic growth.
Mokyr’s work is wide-ranging, but he’s best known for adding a critical new insight about the Industrial Revolution. While there were many contributors to the explosive economic growth that followed the introduction of the steam engine — newly cheap energy, colonial resource extraction, a new breed of entrepreneurship — Mokyr developed a more fundamental explanation. It was, according to his studies, the first example of scientific knowledge combining with technological knowledge to create snowballing innovation that, once started, wouldn’t stop. In the process, it dragged huge numbers of people out of abject poverty.
The other half of the prize was jointly awarded to Philippe Aghion of INSEAD and the London School of Economics, and Peter Howitt of Brown University. They were commended for formalizing the concept of creative destruction, the process in which a new and improved product enters the market and companies selling the old one lose out.
Though based on events of the past, these collective concepts have obvious bearing on our present moment. If AI turns out to be the next general-purpose technology, can it create the kind of self-sustaining growth Mokyr saw propel the Industrial Revolution? Or could this technological wave be different, and end up destroying more jobs than it creates? How much control do we have over the impact it can have on society?
Three weeks after the Nobel Prizes were announced, we spoke with Mokyr about these questions, and what history tells us about the potential trajectory of AI. What follows has been edited for brevity and clarity.
One of the central tenets of your work is the idea that economic growth results from a virtuous feedback loop between scientific knowledge and more practical know-how. Why is that relationship so important?
This is a distinction that I have borrowed from epistemologists, who distinguish between what they call propositional knowledge, roughly speaking what we call science [about how things work] and prescriptive knowledge [roughly speaking what we call technology], about how things are done. By and large, these two sets of knowledge are separate in our minds. So the question is, how do these two sets of knowledge affect one another? And the answer is that each of them stimulates and increases the other, and that gives you what's called a positive feedback loop. What we know is that a positive feedback loop model actually may end up not converging to any equilibrium but may continue to expand, because you get more science that leads to more technology, more technology leads to more science, and that goes on, you know, forever.
Can you give an example of that in action?
I can give you 50 examples, but I'll confine myself to one. Think about how the steam engine came about. It depended on three major insights that the scientific revolution of the 17th century came up with: the understanding of the characteristics and properties of steam; the insight that we are living at the bottom of an atmosphere; and the idea that a vacuum can exist. These are basically the background of understanding how you could move a piston in a cylinder up and down [in a steam engine]. But science owes more to the steam engine than the steam engine owes to science, because [by studying the steam engine] people like Rudolf Clausius, Sadi Carnot and James Prescott Joule in the 19th century finally nailed the science of thermodynamics. Then a man called Rudolf Diesel, who understood thermodynamics, took that insight and built the technology that is the diesel engine. So you see this going back and forth, and the end result is the extremely efficient internal combustion engine, which created a major revolution in economic life and human society called the automobile. Without the give and take between prescriptive and propositional knowledge, you would not have that.
You’ve written a lot about how culture is important to the creation and diffusion of the knowledge we’ve just discussed. Can you describe the ideal cultural environment you have in mind?
Not in less than an hour. But let me give you an example that I have been arguing about for a very long time. One of the regularities of human history is that most societies have an inordinate amount of respect for the knowledge of people living in the past. You think about the influence that Aristotle had on medieval science, the way that in Muslim society all the wisdom is written in the Quran, the writings of Confucius and their importance to the Song Dynasty. The knowledge of the past has always been a source of great respect. At some point in Europe, the culture changed and people started becoming skeptical and dubious about the writings, not just of Aristotle, but the whole range of Greek and Roman scientists, everyone. All of those people are basically shown to be wrong, and people become very disrespectful. And that's a major, major cultural change. It is the kind of phenomenon we live in today, but it is actually quite exceptional in human history. In order to get progress and believe in progress, you need to be arrogant in the sense that you say, “The way my grandfather did it is completely mistaken. We could do better.”
You've emphasized the importance of a culture that respects and funds creation and invention. How might today’s skepticism about the value of scientific knowledge affect future innovation and growth?
There's always a minority of people who feel that science is dangerous, and innovation and invention pose some kind of a hazard. That’s nothing new. When we tell the history of science, we talk about the successes and people rarely mention the fact that for every great scientist, there's a bunch of people who fight back. This takes place in something which I call the market for ideas, and basically what happens is the people who are opposed to science and the people who are in favor of it are trying to convince the public. The best argument that the people who support science can make is that it works.
Funding for basic scientific research in the US has seen major cuts this year. Could that undermine the positive feedback loops that your work has described?
My view is that it will drive the cutting edge of innovation to other countries, primarily China, but I imagine Europe as well and, you know, possibly other areas of the world. As long as the world is not a single entity in which some fool can approach the scientific budget with a chainsaw, what will happen is [innovation] will just move from one place to another. The overall rate may slow down a little bit while you make the transition, but it isn't going to be stopped, in part because we can't afford for it to slow down: We're looking at a bunch of existential problems, above all climate change and demographic change, which need to be attacked with technological advances.
We’ve lived through several decades of phenomenal technological advance, and at the same time seen growth stagnate. How do you explain that?
Well, I have my private explanation, which is quite controversial. I actually think that the way we measure economic growth today was designed for an economy that produced mostly wheat and steel — so you can measure it, and it goes through the market, and you have prices, and that's how the people who design national income accounting designed national income. Then economic growth is basically just the growth of national income, or GDP per capita, over time. But in a knowledge economy, that measure doesn't capture economic performance. Economic performance consists of a great deal of goods that tend to have very high fixed costs and extremely low marginal costs. Think of WhatsApp, think of GPS, think of Twitter, if you like that kind of thing. These technologies have given people a great deal of utility, but it's not in the national income accounts. So I think we're mismeasuring GDP.
You’ve studied general purpose technologies like steam power and electricity. We may now be living through the emergence of another in the form of artificial intelligence. How does AI rank as a world-changing technology compared to the technologies that have preceded it?
Given AI is just in its early stages, it would be rash and irresponsible of me to say how it will change things. It’s like asking Thomas Newcomen [the inventor of the steam engine] how he thought that his invention would change the world. I don't know what AI is going to do. I tend to be, on the whole, very optimistic and bullish about it. There's one thing that it can do which I think is not emphasized enough. In many fields in our world [such as education and medicine], we have, almost by necessity, to take a one-size-fits-all approach to delivering certain services. But if you can look at each case and fine-tune the service you're delivering to that person, you are changing human life enormously. I think that's huge.
One school of thought argues that AI will deliver a world of radical abundance. Others forecast a scenario of only modest increases in growth. What does history tell us about the significance of AI’s potential impact on the economy? And what does history tell us about how long it will take before we see that impact?
Well, my take on that is maybe a bit different from what most people say. I see the human race facing a number of extremely dangerous existential problems, above all climate change, how governments are spending more than they're taking in and building a debt crisis that will crush society, and how that's compounded by demographic change. I'm really worried about these things. My great hope is that, precisely because artificial intelligence is a general purpose technology, we will be able to deploy it in order to prevent the worst of these things from happening. It's not going to create abundance, but if it keeps us where we are instead of society deteriorating, I'll be quite happy with it.
Your work complements the economic idea of “creative destruction” in which disruption ultimately leads to greater productivity, innovation and job creation. With AI, a lot of people are very nervous about how the technology will destroy jobs without creating new ones. Is this time going to be different?
I published an article a decade ago called, “Is this time different?” And the answer is basically: No. The real problem that the world is facing currently, and will face increasingly, is not the scarcity of jobs but the scarcity of manpower. Manpower shortages are plaguing every economy these days. There aren't enough people because of aging, as well as other norms that have emerged [such as retirement ages, shorter working hours and more vacation time] which mean the total number of hours that anybody works over a lifetime has fallen enormously in the last 100 years. You add that to the demographic revolution, and you realize it's manpower that is going to be in short supply. That problem was resolved in the past 30 or 40 years by the ever-growing rate of female labor force participation, but we've exhausted that. The other footnote I'll make is that we are continuously creating new jobs and new tasks that nobody could have dreamed about before. If you had told my grandmother in 1910 that some of her great-grandchildren would have jobs like video game designer and cybersecurity expert, she wouldn't have had the foggiest clue what you were talking about. In the year 2100, there will be jobs that we cannot imagine. So I am not worried about jobs; I'm worried about labor shortage.
Proponents of AI see it as a means of supercharging the economy and solving hard problems like climate change and diseases. Skeptics think it will decimate jobs and exacerbate inequality. What does history teach us about how much we can affect the trajectory of a technology like AI?
There's a deep historical question about who has agency over the adoption of technology, and typically this has been something that the market dictated. So if I come up with a better mousetrap, I can outbid whoever is making the previous mousetrap, because my mousetrap is either cheaper or it catches mice better, or some combination of the two. In theory, economics would predict that that is actually the best way of doing it. But here's the kicker: Because it's new, we don't quite know what it will do. And in many, many cases, when a new invention comes online, it will do something that nobody had expected. You see that with fossil fuels, with leaded gasoline, with asbestos, with CFCs in spray cans. These are the kinds of problems that technological progress runs into and all it tells you is that you need better science and more technology to mitigate and solve the damage that your previous innovation caused, and that goes on and on and on. That's what we are. That's who we are. The alternative of not having technological progress would take us back to the Middle Ages. And nobody would like that.
Listen To Our Podcast
Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.
Magazine and Journal Articles Worthy of Your Time
Can AI capture the mind-boggling complexity of a human cell? from Science
2,800 words, or about 11 minutes
A single human cell is fantastically complicated: trillions of molecules and 42 million proteins constantly interacting and adapting to their environment. For decades, biologists have tried to describe that chaos with mathematical models that track how small changes ripple through the cell. Increasingly, artificial intelligence looks set to offer a more powerful simulation. Instead of being programmed with biological rules, new AI models learn how cells behave by being trained on vast datasets — such as a Chan Zuckerberg Initiative database of gene activity across 35 million human and mouse cells. These systems can already classify cell types, even from species they weren’t trained on, and predict how cells might respond when certain genes are switched off. Early experiments using CRISPR to test those predictions suggest the models are surprisingly accurate. For now, their power is limited by the patchy and inconsistent data available: There’s no single standard or centralized store for cellular information to make use of. But if that infrastructure improves, AI could one day simulate how drugs or gene edits affect patients, so that many experiments can start on a computer rather than in a lab.
The Ark-Builders Saving Fragile Bits Of Our World, from Noema
4,500 words, or about 18 minutes
Drill 300 feet into a glacier, and you can see back in time to the days of Jesus Christ — because trapped in the ice at the bottom are trace elements of ancient atmospheres that let scientists read the story of our planet. On the way down, you can spot signs of today's microplastic pollution, nuclear testing and the Industrial Revolution. But as glaciers melt at alarming rates, those frozen records — and the insights they hold — are disappearing. That is just one of several fascinating examples from this story about scientists and archivists around the world who are racing to preserve these and other fragile traces of human and natural history. The cells of endangered species, seeds of rare plants and languages on the brink of extinction are all being carefully collected and logged so that our understanding of the past isn’t accidentally lost forever. In the process, the story raises a big question: How do you decide what to preserve and remember?
All of My Employees Are AI Agents, and So Are My Executives, from Wired
2,700 words, or about 11 minutes
Sam Altman believes the first AI-powered, one-person billion-dollar company may already be in development. But if this story is any indication, building it will be a nightmare. The author of this piece, Evan Ratliff, started a company staffed entirely by AI: five synthetic employees, all powered by large language models and able to communicate via email, Slack, text and phone. They could collaborate, had a kind of working memory (in the form of a Google Doc), and took direction from their human boss, who asked them to build a business. They did, sort of, creating a service called Sloth Surf that ostensibly enables people to outsource their online procrastination. Along the way there was chaos: lies (“Kyle claimed we’d raised a seven-figure friends-and-family investment round. If only, Kyle.”), mistakes (“Ash would mention user testing, add the idea of user testing to his memory, and then subsequently believe we had in fact done user testing.”), and long bouts of pointless busywork (like spending two hours planning a team offsite the bots couldn’t even attend). This is all slightly silly, of course — part joke, part experiment. But it’s also a revealing glimpse at the limits of today’s large language models, and what happens when you ask AI to do work that neither it, nor you, fully understand.