Trends to Watch in 2026
Dear Aventine Readers,
Happy New Year! Just when you got through the best-of lists, here come the what’s-next lists. This week we bring you our best bets on what the next 12 months might bring in terms of tech and science. As you might expect, there’s a lot of AI, which we predict will increasingly turn up in classrooms, war rooms and the edicts of regulators. You are also likely to see more robo-taxis — if you aren’t already using them regularly — and GLP-1 weight loss drugs, which will get cheaper.
Have we missed something important that you’d like us to look into? Please let us know by responding to this email. We’d love to hear from you.
Sincerely,
Danielle Mattoon
Executive Director, Aventine.
Robo-taxis enter the mainstream
After a decade of steady progress, autonomous vehicles are scaling in a way that makes mainstream adoption seem close to inevitable.
In the US, Waymo illustrates the shift. After debuting fully driverless cars in Phoenix in 2018, the company expanded to San Francisco, Los Angeles, Austin and Atlanta over the following six-plus years. In 2025 its expansion picked up, pushing into Dallas, Houston, San Antonio, Miami and Orlando. This year looks set to be its most ambitious yet, with launches planned for New Orleans, Tampa, Minneapolis, Las Vegas, San Diego, Detroit, Nashville, Washington, D.C., and London. The cars are now also venturing onto freeways, paving the way for longer trips.
Similar acceleration is underway in China. Baidu’s Apollo Go and startups including Pony.ai, WeRide and DeepRoute.ai now operate autonomous ride-hailing services there. Apollo Go alone covers 22 cities worldwide, mainly in China, as well as Dubai and Abu Dhabi.
This pace is enabled by technology that has been quietly maturing. AI perception systems have improved, quantities of training data have increased and the long tail of edge cases has been reduced. Waymo has now logged more than 120 million fully autonomous miles with only a handful of minor incidents. “Within the areas that they drive, [Waymo] cars are as good, or better than, a human driver,” said Jeff Schneider of Carnegie Mellon University’s Robotics Institute and founding member of Uber’s Advanced Technologies Group.
There are signs the pace could continue. In 2025, UK-based Wayve took its autonomous cars on a tour of 90 cities around the globe without retraining its AI on local driving patterns. That’s a notable milestone, given that learning new road conventions is one of the big challenges of scaling the technology, according to Schneider.
There are still obstacles to overcome. Robo-taxis continue to rely on remote human operators to take over in unexpected situations. The economics remain challenging, and some cities remain wary of inviting the cars in. Also: Not every autonomous driving company is performing at the same level. Tesla’s robo-taxi service, for example, launched with backup drivers in Austin and San Francisco in 2025, but the rollout has been plagued by driving errors and crashes.
Still, momentum is building, and 2026 is shaping up to be the year when robo-taxis cease to be a novelty confined to a few neighborhoods and start becoming a normal urban convenience.
Weight loss drugs proliferate
If you think GLP-1 weight-loss drugs are already everywhere, wait for 2026. Over the next year, these medications will become cheaper, more convenient and more potent — raising questions about how to deploy them safely across large swaths of society.
One driver will be the expiration of patents. In early 2026, semaglutide, the active ingredient in Wegovy and Ozempic, will lose protection in Brazil, China, India, and Turkey — countries that together are home to roughly a quarter of the world’s obese adults. Cheaper generics are expected to flood those markets, and while there is a floor on how low prices can go, costs could fall by as much as 80 percent in those countries. That shift alone could open the drugs to tens of millions of new users.
This year is also likely to see the introduction of more convenient and powerful GLP-1 drugs. Novo Nordisk has developed a pill form of semaglutide, which was approved by the FDA at the end of 2025, and Eli Lilly has a rival pill, orforglipron, which is still awaiting approval but is likely to reach the market in 2026. The pills are less effective than injections, but their convenience could dramatically increase long-term adherence, said Giles Yeo, a professor at the Institute of Metabolic Science at the University of Cambridge in the UK. Meanwhile, newer injectable drugs in late-stage trials have delivered greater weight reduction than existing medications.
In the US, agreements between the federal government and manufacturers will expand access through Medicare and Medicaid, allowing patients to pay roughly $245–$350 per month for Wegovy and Zepbound, compared with the typical $1,000 per month without insurance. Once approved, orforglipron is expected to cost around $149 per month for the lowest dose under the same agreement. As of late 2025, roughly one in eight US adults was taking a GLP-1 drug for weight loss or diabetes, according to the health-policy organization KFF. The high cost of the drugs was the most commonly cited reason for stopping.
Together, these shifts will accelerate adoption. And while the potential public-health upside is enormous — obesity is associated with higher rates of cardiovascular disease, some cancers, and other chronic conditions — physicians worry about the downstream effects of widespread, and perhaps less monitored, use. “These are not cosmetic drugs,” cautioned Yeo. “The actual prescription of the drug needs to come with wraparound care,” he said, arguing that nutritional and lifestyle support will be essential if these medications are to be used safely across large populations.
AI companions get guardrails
A string of high-profile tragedies linked to AI companion chatbots intensified calls for stronger protections in 2025. This year, those guardrails will take shape.
Several incidents have jolted the public and policymakers. In Florida, a 14-year-old boy formed an emotional relationship with an AI companion on the Character.ai platform — a relationship his mother, who is suing the company, claims contributed to his decision to end his life. In California, seven lawsuits filed against OpenAI this fall include allegations of assisted suicide and involuntary manslaughter. Clinicians report a growing number of cases of AI chatbot users developing psychosis-like symptoms. According to one survey, roughly 72 percent of US teens have tried an AI companion. And while OpenAI says that only 0.07 percent of its users show signs of a mental health emergency in a given week, across its 800 million weekly active users that equates to about 560,000 people.
The situation has alarmed policymakers. In October, California enacted the nation’s first law specifically regulating AI companion chatbots, which went into effect on January 1, 2026. Aimed at protecting children, the law bans exposing minors to sexual content, requires clear on-screen disclosures that conversations are AI-generated and obligates companies to maintain protocols for dealing with conversations related to suicide or self-harm. New York, Utah and Maine have introduced or passed similar measures.
On a national level, in October 2025 a bipartisan group of US senators introduced the GUARD Act, which would ban companies from providing AI companions to anyone under 18, require that all chatbots conspicuously disclose that they are not human and impose criminal penalties on companies that allow minors to access bots with sexual content.
At the same time, companies building these systems at least appear to be weighing their societal obligations. In late November, representatives from Anthropic, Google, OpenAI and Meta joined academics at Stanford to discuss new guidelines for chatbot companions — including better interventions when potentially troubling topics or behaviors are discussed, as well as more effective age verification.
Gaia Bernstein, author of “Unwired: Gaining Control Over Addictive Technologies” and a visiting fellow at the Brookings Institution focused on law and technology, believes the federal bill has a chance of becoming law. The AI industry, she noted, is unlikely to mount the same level of resistance that derailed past attempts to regulate social media — which many people see as an analog to the AI companion problem — in part because the reaction to the harms has mounted much more quickly this time. “The faster, the better,” she said, arguing that a federal standard would be the best way to ensure consistent protections as AI companions become more deeply embedded in daily life.
AI helps us make better high-stakes decisions
Predicting enemy responses to military action, hashing out peace deals, forecasting economic upheaval — these are some of the most difficult challenges humans navigate. In 2026, AI is likely to play a larger role in how such negotiations play out.
One area that promises to become more sophisticated and useful is war-gaming — the simulated exercises that militaries and governments use to prepare for conflict. Current war game simulations are resource intensive to both design and run, so researchers at RAND and the Center for Strategic and International Studies (CSIS), as well as teams inside the Department of Defense, are exploring how AI systems could improve the development and experience of the games. On the development end, AI can speed up the way games are designed by helping synthesize content. In terms of player experience, AI agents can substitute for humans and actively shape the game to make it more expansive and realistic, or reduce the number of human players required. For now, most efforts are focused on the former, partly because it’s useful to observe human decision-making, said Benjamin Jensen, director of the Futures Lab at CSIS. But as AI advances and becomes better at explaining its own decisions, the technology could enable these games to become more complex and less dependent on multiple human players.
Such advancements could be transformative in improving the experience level of decision makers without the need for real-world crises. “We don't want humans learning on the job in nuclear war,” said Stephen Worman, a senior political scientist at RAND. More games — and more realistic ones — mean better prepared policymakers and military officers.
Other tools are emerging from this sort of work too: One system developed at CSIS, trained on data from a strategy game played by foreign policy experts and the text of past peace agreements, can evaluate potential diplomatic deals and flag issues that would cause talks to stall. AI is also being deployed to support the work of human “superforecasters,” individuals who have, over the years, made statistically better predictions in their fields than other experts. Mantic, a London-based forecasting startup, provides an early glimpse of AI’s abilities. In 2025 it competed in the Metaculus Cup, a forecasting tournament, and ranked 8th in a field of 551 contestants — outperforming two professional forecasters on questions involving geopolitics, elections, and technology.
The stakes of using AI in these contexts are high, and Worman and Jensen say it’s vital to proceed with caution. Some models show tendencies to escalate toward conflict; others back down too quickly. And in highly consequential settings, an incorrect prediction could be catastrophic. But with humans in the loop, researchers are optimistic that these tools could help us navigate conflict and challenges more wisely.
Compute gets a marketplace
The surge in artificial intelligence has turned compute — the processing power that runs AI models and underpins much modern technology — into a scarce and valuable commodity. This year we’ll see if that commodity can be traded on open marketplaces.
Compute has long been a back-end utility, procured quietly through private deals between vendors and customers. Unlike other asset classes such as stocks, oil and metals, it is largely unavailable on public markets. That opacity makes it hard for customers to find the best price, and hard for vendors to reliably find buyers for all the capacity they have available.
Cloud providers such as AWS and Google Cloud already run internal spot markets for compute, auctioning spare capacity in real time. But a new cadre of startups aims to create more open, standardized trading environments. Silicon Data, a market-intelligence firm, has built an index that tracks GPU rental costs. Compute Exchange and OneChronos are developing auction-based platforms that directly connect buyers and sellers. Others — including Vast, Akash Network and Golem — are using crypto-inspired technology to create decentralized GPU marketplaces.
One of the biggest challenges is defining tradable units of compute. Unlike electricity, for which a kilowatt-hour is the same everywhere, an hour on one chip isn’t comparable to an hour on another. Market builders must therefore find ways to match buyer and seller preferences: One company may need a short job completed immediately at a premium price, while another may have a long task that can wait. Auctions for radio spectrum and online advertising provide one model for managing such complex transactions; OneChronos, for example, is working with Auctionomics, the firm founded by Stanford economist Paul Milgrom, who helped design those auction formats.
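To make the matching problem concrete, here is a minimal, hypothetical sketch in Python of how a call-style auction might clear a compute market: heterogeneous chips are normalized into common “standard GPU-hours,” urgent and high-paying bids are matched to the cheapest available capacity first, and the gap between each bid and ask is split. The chip weights, data structures and matching rule are illustrative assumptions made for this newsletter, not a description of how Compute Exchange, OneChronos or any other platform actually works.

```python
from dataclasses import dataclass

# Hypothetical weights that normalize heterogeneous chips into a common unit
# ("standard GPU-hours"). A real marketplace would derive these from measured
# throughput on reference workloads, not a hard-coded table.
CHIP_WEIGHTS = {"A100": 1.0, "H100": 2.1, "MI300X": 1.9}


@dataclass
class Bid:
    """A buyer's request for compute."""
    buyer: str
    chip: str
    hours: float       # raw chip-hours wanted
    max_price: float   # price ceiling per standard GPU-hour
    urgent: bool       # must run now vs. can wait for a cheaper match


@dataclass
class Ask:
    """A seller's spare capacity."""
    seller: str
    chip: str
    hours: float       # raw chip-hours available
    min_price: float   # reserve price per standard GPU-hour


def standard_hours(chip: str, hours: float) -> float:
    return CHIP_WEIGHTS[chip] * hours


def clear(bids: list[Bid], asks: list[Ask]) -> list[tuple[str, str, float, float]]:
    """Greedy call-auction clearing: urgent, high-paying bids meet the cheapest
    asks first. Returns (buyer, seller, standard GPU-hours, price) tuples."""
    bids = sorted(bids, key=lambda b: (not b.urgent, -b.max_price))
    asks = sorted(asks, key=lambda a: a.min_price)
    trades = []
    for bid in bids:
        need = standard_hours(bid.chip, bid.hours)
        for ask in asks:
            if need <= 0:
                break
            supply = standard_hours(ask.chip, ask.hours)
            if supply <= 0 or ask.min_price > bid.max_price:
                continue
            qty = min(need, supply)                      # partial fills allowed
            price = (bid.max_price + ask.min_price) / 2  # split the surplus
            trades.append((bid.buyer, ask.seller, round(qty, 1), price))
            ask.hours -= qty / CHIP_WEIGHTS[ask.chip]    # convert back to raw hours
            need -= qty
    return trades


if __name__ == "__main__":
    bids = [Bid("ai-lab", "H100", 500, 3.00, urgent=True),
            Bid("biotech", "A100", 800, 1.20, urgent=False)]
    asks = [Ask("cloud-a", "H100", 300, 2.40),
            Ask("cloud-b", "A100", 1000, 1.00)]
    for trade in clear(bids, asks):
        print(trade)
```

Run as written, the urgent bid is filled across both providers at prices between its ceiling and the sellers’ reserves, while the lower-priced bid goes unmatched — the kind of partial fill and price discovery a real exchange would have to handle at vastly larger scale, and with far messier definitions of what a “unit” of compute is.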
Several startups have begun testing their marketplaces. In 2025, Compute Exchange helped 75 providers sell chip time to customers through auctions. OneChronos told Aventine that later this year it plans to open its platform, which will treat GPU compute power as a tradable asset that allows buyers to hedge risks. Over the next year we’ll learn whether compute buyers — from AI labs to hedge funds to biotech firms — are ready to manage processing power the way they might trade stocks.
Fusion could prove its commercial viability
For decades, people have dreamed of nuclear fusion providing limitless clean energy. This year, spurred by investor pressure, we might get the first glimpse of whether that vision can become commercially viable.
The central question hanging over fusion — which attempts to recreate on Earth the same reaction that powers the Sun — is whether a system can produce more energy than it consumes. Known as net energy gain, it has been achieved only in a research setting, first in December 2022 at the National Ignition Facility at Lawrence Livermore National Laboratory. No private company has done the same.
A wave of well-capitalized startups is racing to change that, many of them under pressure from lucrative electricity-supply agreements that depend on fusion proving itself. Some of these efforts could hit significant milestones in 2026.
Take Helion Energy, based in Everett, Washington, which has raised more than $1 billion. The company has built a series of increasingly advanced prototypes leading up to its first commercial-scale system, Orion, slated to begin supplying electricity to Microsoft in 2028. Helion’s latest prototype has been operational since late 2024 and was originally expected to achieve net energy gain in 2025. That target has passed, and though Anthony Pancotti, a co-founder of Helion, wouldn’t commit to a specific date, he said he was “pretty confident” the company will hit its 2028 delivery commitment to Microsoft, which would mean a major breakthrough could be delivered in the coming year.
Helion isn’t alone. General Fusion, based in British Columbia, has set a target of demonstrating net energy gain by 2026. Commonwealth Fusion Systems, based in Devens, Massachusetts, has raised almost $3 billion and aims to achieve net energy gain around 2027. It, too, has a major commitment on the horizon: an agreement to begin supplying Google with electricity in the early 2030s.
These companies must still scale their prototypes into full power plants, build reliable supply chains and convince utilities that their technology can operate safely and economically. But if and when a private system achieves net energy gain, it will be a landmark moment — the clearest sign yet that fusion has a path to commercial deployment.
AI avatar tutors get ready for the classroom
The education sector reeled in the wake of ChatGPT’s launch amid well-founded fears that the technology would lead to mass cheating and erode creativity and critical thinking. After a rocky start, however, many educators are cautiously exploring whether the technology might improve learning: More than half of US states have published guidelines or strategies for K–12 AI use, and states and school districts are running pilots to test the technology’s potential.
However schools choose to test AI-powered educational tools, adoption among students is already skyrocketing. Khanmigo — the AI tutor developed by Khan Academy and one of the first major entrants into AI tutoring — was used by two million people during the 2024-2025 school year, up from 236,000 in 2023-2024, growth of 731 percent. A UK survey shows student use of any AI tool rising from 66 percent in 2024 to 92 percent in 2025.
Anticipating increased classroom demand, AI education startups are busy building a new wave of so-called avatar tutors that combine AI-generated video with LLM-powered teaching to create interactive characters that can speak directly to students. Startups including Edumentors, Praktika and Efekta are building the systems, while synthetic-video companies like Synthesia, HeyGen and eSelf are positioning the tools for classroom use.
Such systems are only just beginning to emerge, but they could soon be part of the educational lives of millions of students: Efekta is testing its tool, called Addi, in schools across South America to help teenagers learn English; it expects the tool to be used by as many as four million students. Edumentors’ tool, for now, requires signing up for a waitlist.
The effectiveness of these new technologies is still being determined. Khan Academy told Aventine earlier this year that it remains difficult to prove definitively whether AI tutoring systems improve learning over the long term. Stephen Hodges, CEO of Efekta, based in London, agrees. But more evidence is emerging. One of the first randomized controlled trials of AI tutors, published in Scientific Reports this year, found that students using AI tutors improved their test scores by twice as much as students who took part in active learning lessons in class.
Concerns are not likely to go away. Researchers worry about hallucinations, biases in instruction and the erosion of students’ critical-thinking skills. Yet the momentum is clearly toward AI-based educational technology. If current trends continue, 2026 could be the year these tools move from pilots into the mainstream.
AI starts doing real, professional work
So far, large language models have mainly been used in businesses to write code, automate customer service and speed up basic copywriting. They’ve struggled with the long, intricate tasks confronted by investment bankers, management consultants and accountants. In the year ahead, that looks set to change.
Large language models are good at producing believable written documents because they’ve been trained on vast quantities of public text. But much of what happens inside banks, law firms, and consultancy practices relies on proprietary know-how locked inside firms — and inside the heads of the people who work there. Now AI labs are trying to extract that expertise directly. OpenAI has hired more than 100 former investment bankers from firms including JPMorgan Chase, Morgan Stanley and Goldman Sachs to teach its AI how to build financial models. That recruiting is being handled by Mercor, a startup that has also contracted roughly 150 ex-consultants from McKinsey, Bain and BCG to train AI systems on entry-level consulting tasks. Mercor’s job board now lists openings for lawyers, physicians, actuaries and financial advisers.
These new professional AI models are trained in so-called AI gyms, where models learn by watching professionals work before practicing the tasks themselves. The Information reported that Anthropic is considering spending as much as $1 billion on AI gyms in the next year alone.
As these models improve, researchers are building new ways to assess their abilities. OpenAI has created a suite of economic benchmarks called GDPval to measure performance on economically valuable tasks across 44 occupations. So far, the most advanced models approach — but do not yet surpass — human-level ability, with Anthropic’s Claude Opus 4.1 scoring highest. Another test, called APEX, focuses on investment banking, consulting, law and primary medical care. On average, the best models meet about 60 percent of the criteria established to assess human-level competency, with OpenAI’s GPT-5 scoring highest.
In other words, the cream of 2025’s AI crop still falls short of true professional-grade output. But as the first generation of models trained directly by domain experts begins to emerge from the labs in 2026, expect those scores to rise — and for these systems to start taking on ever larger chunks of real, revenue-generating work for businesses.
Personalized gene editing gets its first big test
In 2025, scientists deployed the world’s first personalized CRISPR treatment to help save the life of a baby boy. In 2026, the same group will launch a trial to test whether this approach can be scaled up.
Baby KJ, the first patient to receive such a bespoke genetic treatment, was given a CRISPR therapy designed to edit a section of DNA so his body could produce a vital enzyme needed to break down ammonia in the bloodstream. Without that enzyme, ammonia accumulates and causes brain damage. In under six months the team at the Children’s Hospital of Philadelphia scrambled to design the therapy, manufacture it and secure emergency FDA authorization — a requirement for any individualized gene-editing treatment. It’s too soon to say whether the baby’s condition will be resolved for life, but his symptoms have dramatically improved and his medication has been significantly reduced.
This year, the researchers begin a trial to test how personalized CRISPR treatments can be developed and deployed more efficiently. Central to the effort is an “umbrella” approach. Instead of building brand-new therapies for each patient, the team is creating a shared gene-editing tool for a family of related metabolic disorders, paired in each case with a patient-specific guide — the CRISPR component that determines the exact location of an edit. Crucially, the FDA has agreed that this approach can be treated as a single drug for regulatory purposes, meaning each new patient would not require individualized approval. The first doses under this new framework could be administered before the end of 2026.
Haiyan Zhou, a professor of genetic medicine at University College London, said that the biggest challenge facing these therapies is financial. R&D and manufacturing demands are enormous. Casgevy, the first FDA-approved CRISPR therapy, used to treat sickle cell disease, costs around $2.2 million per patient — and it does not require the one-off customization used in Baby KJ’s case. For bespoke CRISPR treatments, costs can run into the many millions per patient. The hope is that umbrella approaches and faster regulatory pathways will bring prices down.
Meanwhile, momentum is building beyond Philadelphia. In late 2025, the US government’s Advanced Research Projects Agency for Health launched a program to accelerate personalized gene editing. New centers, including the newly founded Center for Pediatric CRISPR Cures at the University of California and the UK Medical Research Council’s Centre of Research Excellence in Therapeutic Genomics, are pushing toward similar goals. Zhou said that systems akin to Philadelphia’s umbrella model could speed the development of other individualized treatments as well, including CAR-T therapies and oligonucleotide drugs.
The mood across the field is one of cautious optimism: Bespoke genetic treatments, once fantastical, may soon become far more routine. “I’m quite positive,” said Zhou.
Drone swarms take to the skies
Pioneered on the battlefields of the Ukraine conflict, drone-swarming technology — which allows a single operator to control multiple unmanned aerial vehicles — has matured rapidly. In 2026, expect it to become both a far more common feature of warfare and an increasing concern for counterterrorism officials.
AI-enabled unmanned aerial vehicles, or UAVs, can operate as coordinated units, carrying cameras, munitions or electronic-warfare payloads to gather intelligence or overwhelm targets. With minimal human input, a swarm can assign roles, re-task units mid-flight and adapt to losses. In large numbers, these small aircraft can saturate airspace and exhaust an opponent’s defenses. Because many are considered expendable, “kamikaze” missions are routine. Militaries will need to adjust quickly. “If small arms define the 20th century, drones will define the 21st,” Daniel Driscoll, US Army secretary, said at a conference in the fall of 2025. “They are reshaping how humans inflict violence on each other at a pace never witnessed in human history.”
Commercially available systems are now emerging. This fall, European defense software firm Helsing launched an AI-driven swarming platform, as did the US–German startup Auterion. US-based Anduril, which has supplied Ukraine with drones, has steadily expanded its Lattice platform, including tools that enable “teams of unmanned systems to autonomously and dynamically collaborate to achieve mission outcomes.” Defense officials expect future conflicts to resemble Ukraine’s, with drones playing a central role.
There is growing concern, however, that the same technologies — increasingly cheap, widely available, and easy to deploy — could be used by terrorist groups to target large public events. The Washington National Guard recently rehearsed how it might counter drone attacks at the 2026 FIFA World Cup, to be held across the US, Canada, and Mexico. But stopping large swarms is difficult. Jamming communications risks disrupting civilian aircraft, and using force against hundreds of small, fast-moving targets in populated areas creates obvious hazards.
For now, these systems are not fully autonomous; a human still directs each swarm. But the swarms themselves already make some decisions independently, raising profound — and so far largely unanswered — questions about how much autonomy artificial intelligence should be allowed to exercise on the battlefield.