Newsletter / Issue No. 69

Image by Ian Lyman/Midjourney.


Thu 30 Apr, 2026


Dear Aventine Readers,

As AI dominates headlines and CEOs warn of job elimination, the public's feelings about the technology have hardened into fear and anger, fueling political efforts to curb its growth. The question is whether the AI industry is even listening. This week we dive into what Substack writers have to say about the increasing public unease with the technology and what AI companies should be doing about it. Should they wait to be regulated, or preemptively regulate themselves?

More from Substack:

  • An argument for retiring the term AGI. 
  • The reason education research is weak and sloppy.
  • Musings on recursive self-improvement.
  • And a thought experiment about the future of fertility.

Thanks as always for reading,

    Danielle Mattoon
    Executive Director, Aventine


    Views from Substack

    Welcome to the AI Backlash

    News reports of a single Molotov cocktail are usually a sign that someone is on the wrong end of a grievance. 

    A few weeks ago that person was OpenAI CEO Sam Altman, and the grievance was the technology he has made almost synonymous with his name: AI. 

    The hurling of the improvised weapon, along with subsequent gunshots fired at Altman’s door, gave an ominous specificity to feelings of fear and anger about artificial intelligence that had previously been simmering in the background. 

    They also gave writers an opportunity to weigh in — not just on the attacks themselves, but on the entire enterprise of AI as it exists today. On Substack, where technology’s chattering class slings takes in hopes of shaping the discourse, the conversation ranged from the immediate — Jasmine Sun’s “AI populism’s warning shots” — to an admonition to the AI community from Anton Leicht: Come up with regulation you can live with before others come up with regulation you can’t.

    The posts reflect a sense from both fans and detractors that the technology is careening into an uncertain future with no guardrails, and also that unease about AI is no longer confined to niche debates: It is widespread, deeply emotional and increasingly political. So let's dive in. 

    First, the data

    Newcomer, a Substack focused on the startup and venture capital industry, sifted through 13 polls on public sentiment about AI in the US published since September 2025. From those surveys, it identified some key trends. The headline concern is about jobs: 71 percent of white-collar workers and 73 percent of blue-collar workers think AI will reduce the number of available jobs. More generally, 77 percent of respondents are worried that the technology could be a broader "risk to humanity." And an overwhelming majority — 80 percent — supports government regulation of AI, even if that means slowing down the rate of progress. Notably, despite all the media coverage about the data center buildout, the Americans polled did not have strong opinions about it. AI experts — defined as “AI conference presenters or authors with technical or applied AI expertise” — are more positive about the technology and less concerned about job loss than other respondents. And sentiment has shifted over time: 50 percent of people say they’re more concerned than excited about the increased use of AI in daily life, up from 37 percent in 2021.

    Jasmine Sun, who covers tech culture on her self-titled Substack, pointed out that AI CEOs are themselves partly to blame for the widespread anxiety and fear. Dario Amodei has said that AI could wipe out roughly half of all entry-level white-collar jobs within one to five years. Sam Altman has said that the AI rollout, handled incorrectly, could cause “significant harm to the world.” And Elon Musk put the risk of human annihilation by AI at 20 percent. “Backlash is … the inevitable consequence of AGI-pilling the nation,” she wrote. “Like okay, congrats, now everyone’s woken up to AI and its threat. Do we expect them to stay quiet and meekly accept what’s to come?”

    As to the extreme measures taken by the suspect, Daniel Alejandro Moreno-Gama, some Substackers argue that his actions were the product of the apocalyptic stakes described in the more ardent writings in the AI safety community. “The same people who warned that a superintelligent AI might pursue its goals autonomously through any means necessary have built a social movement with a structural incentive to commit political violence,” wrote Jordan Schachtel, a conservative writer with a Substack called The Dossier. Moreno-Gama was reportedly a reader of the rationalist and AI doomer Eliezer Yudkowsky, author of last year’s If Anyone Builds It, Everyone Dies. Moreno-Gama is also a member of a Discord server run by PauseAI, an organization seeking to halt AI development, which stated that it “unequivocally condemns the attack” on Altman. For his part, Yudkowsky has pushed back at length on X against the idea that describing AI as an existential threat inspires violence. 

    But while the attacks were rightly and strongly condemned by institutions and public figures, Brian Merchant noted on his Substack, Blood in the Machine, that the suspect was “widely cheered online for his actions among countless non-doomers: he was hailed as a folk hero for acting on behalf of class interests.”

    Merchant, a tech journalist who wrote a book (also called Blood in the Machine) drawing a parallel between Big Tech and the 19th-century Luddite resistance to automation, has been tracking the political fallout of AI for years. A week before the attack he published a list of recent initiatives to “ban, reject and shut down” the technology. Some examples: Bernie Sanders and Alexandria Ocasio-Cortez proposing a federal moratorium on data center construction; Wikipedia banning LLM-generated content; Hachette canceling a novel found to contain AI-generated prose. As Merchant sees it, the pushback is becoming more sweeping. “The questions we are litigating now are less ‘Is this AI product good or bad,’” he wrote, “and more: ‘We have seen what it can do, and do we want AI to exist in this space at all?’”

    Clawing back

    Could the AI industry find a way to turn things around? That’s the focus of a post by Anton Leicht, a researcher focused on the political economy of advanced AI, on his Substack, Threading the Needle.

    His main observation is that AI accelerationists — the people and organizations who want AI development to continue at pace — are losing the political war through strategic miscalculations like vocally objecting to AI regulation and using super PAC spending to influence political campaigns. He argues that it will be better in the long run for AI advocates to be at the table crafting regulation than to stonewall it. When “pro-regulatory sentiment eventually breaks through, whether from irrational anxiety or rational risk intolerance, the actual policy ideas in the drawer will have been crafted without accelerationists in the room,” he writes. “And by 2028, the pro-AI project will be left without allies, outflanked on the left and on the right by the populist backlash it failed to contain.” 

    Four days before that Molotov cocktail was lobbed into Altman’s yard, OpenAI published an Industrial Policy Blueprint promising “to keep people first,” which Leicht sees as a potential first step in the AI industry taking its reputational problem more seriously. What does it have in mind? “Convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and operating costs fall,” is one example. “Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant,” reads another. These are nice ideas, Leicht writes, but he is skeptical that they amount to much more than PR. A true pivot, he thinks, would need to be more dramatic: companies proposing broader regulation than they might otherwise like in the hope that they can win over moderate voters and politicians.

    To get a better perspective on the moment we’re in, Derek Thompson argues in his namesake Substack that we should look to the advent of another transformative technology. “It is not so hard to squint and see artificial intelligence in the story of electricity,” he writes. “From the feuding private inventors that birthed the technology to the inevitable showdown with the federal government over regulation. As with AI, the people who invented the hardware guts of electricity saw their work as a project with world-changing potential.”

    Like electricity, Thompson argues, AI is real and inevitable. And the highly leveraged holding companies, oligopolistic concentration of power and minimal oversight that existed in the electric power industry are mirrored in the AI industry. What might the future hold if these parallels persist? In the early 20th century, significant government intervention was required to bring the electricity sector to order. The same oversight might be needed to tame the public’s anger when it comes to AI. 

    Listen To Our Podcast

    Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

    Substacks in Brief

    Notable Thoughts from Life Online

    The Future of Fertility, from Uncharted Territories

    Falling birth rates and aging populations have fueled widespread concern about an impending demographic — and economic — crunch. But what if that trajectory could be flipped? In this essay, which is really more of a thought experiment, Tomas Pueyo — a prolific futurist who writes broadly about how technology will shape the world — imagines a world in which advances in technology make the future look very different from current forecasts. It is not for the fainthearted. Reproductive medicine could make conception easier and allow parents to optimize embryos for health and intelligence. Artificial wombs could remove the physical burden of pregnancy. Humanoid robots might take on much of the drudgery of childcare. Taken together, these shifts could transform the economics and experience of parenting so that having large families once again becomes the norm. In this scenario, families with 10 or more children are no longer unusual, Pueyo argues, and population collapse gives way to a boom. It’s an extreme and controversial vision, and many of its assumptions are debatable. But it’s an interesting counter to the dominant narrative.

    What will be scarce? from Ghosts of Electricity

    For those anxious about AI-driven job loss, this post offers a glimmer of hope. Economist Alex Imas describes how automation doesn’t necessarily have to eliminate work and could instead push more of the economy toward what he calls the “relational sector,” in which human involvement enhances the value of the goods and services being sold. As people become wealthier, they don’t just acquire more things, they spend more on bespoke luxuries only humans can produce — experiences like massages or fancy dinners, status goods like handmade clothes, guided travel. At the same time, automation will drive down the cost of commodity goods, which has the effect of raising purchasing power. That, in turn, pushes demand even further toward so-called relational work. And because these sectors resist automation by design — removing the human element would undermine their appeal — they become relatively more expensive. Imas suggests that the more efficient AI and machines become at producing things, the more valuable human involvement becomes in other areas of the economy, meaning that there will be plenty of work to be done. It’s not an entirely new idea, but Imas gives it new life. 

    The term “AGI” is almost useless at this point, from Helen Toner

    Maybe you always found the term AGI useless. But as Helen Toner, interim director of strategy at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member, argues in this post, it used to serve an important purpose. For years, the term "artificial general intelligence” functioned as a convenient shorthand: a rough way of gesturing toward where AI might be headed, even if no one agreed on the exact definition. The ambiguity was fine when AGI was a long way off; as a concept it helped align conversations across research specialties. Toner points out that this is no longer the case. Because some people claim we’ve already achieved AGI and others argue we’re nowhere close, the term has lost its universally understood meaning. This, Toner argues, makes it harder to agree on risks, timelines and what should happen next. Her solution is that we retire the term and instead be explicit about what we mean — whether that’s fully automated AI research, say, or something closer to machine consciousness.

    Musings on Recursive Self-Improvement, from Technologik

    Many people who work in AI are excited about recursive self-improvement, in which AI systems fully automate their own R&D and rapidly improve themselves, triggering a feedback loop that leads to superintelligence, which will in turn lead to a transformation of our economy and society. In this post, Séb Krier, who leads frontier AI policy at Google DeepMind, explores the frictions that could slow such explosive takeoffs. The first is the technology itself. Even if models improve at AI research, progress may still be constrained by processing power, training costs, and the need to turn advances into usable products that justify continued investment. The second is far broader: Fast-improving models don’t automatically translate into societal and economic transformation. AI systems still have to operate within existing constraints — regulation, legal systems, infrastructure, corporate processes — all of which evolve far more slowly than software. Even in a scenario in which AI dramatically boosts productivity, he argues, those sources of friction don’t disappear. Krier’s conclusion is that recursive self-improvement is unlikely to produce a sudden, runaway intelligence explosion, but something more akin to a new, evolving industrial revolution.

    Education research is weak and sloppy. Why? from The Argument 

    Here’s a deeper dive into a question we touched on last month: How can societies make good decisions about education policy when the evidence base is unreliable? According to this piece, the answer is: You can’t, because education research is broken. A Nature analysis of 600 social science papers found that education was the least likely field to share underlying data and code with other researchers. And among the subset of education studies in which data could be obtained, none were precisely reproducible, compared with about 54 percent across the social sciences as a whole. Other fields, like psychology and economics, have spent the past decade trying to fix similar problems. They have adopted practices like preregistration, in which researchers specify their methods in advance, and stricter data-sharing requirements to help reduce cherry-picking and improve transparency. Education research hasn’t followed suit. But as the piece argues, an overhaul isn’t likely to happen until the field acknowledges the scale of the problem. 

    How citations ruined science, from David Oks

    A surge in AI-generated research papers has sparked worries about the integrity of science. David Oks, who writes about economics, argues that AI isn’t the root cause, but rather that it’s exacerbating an existing problem. The deeper issue, he argues, is incentives. Modern science runs on citations: They determine careers, shape journal rankings and influence how departments and universities are evaluated. As a result, researchers often choose their course of study with the aim of being cited. That has already had some arguably undesirable consequences: It encourages safe, incremental work over risky ideas that challenge prevailing views; it rewards review articles and commentary that are easy to cite; and it can create feedback loops in which dominant theories become ever more entrenched, crowding out alternatives. Oks points to high-profile cases, such as the amyloid hypothesis in Alzheimer’s research, in which the field doubled down on a single thesis that ultimately failed to deliver. AI supercharges all of this by helping to generate passable papers faster than ever. Oks argues that a full rethinking of how scientific work is shared is in order, moving away from traditional papers toward something more like open research repositories, the way it often works in engineering.
