Newsletter / Issue No. 26

Image by Ian Lyman/Midjourney.

31 Jan, 2025

Dear Aventine Readers, 

For years, the idea of a quantum computer was theoretical: Could a reliable computer be built to reflect and understand the laws of nature? And once built, could it help solve some of nature's most pressing challenges? 

The answer to the first question became clear last year: Yes. As for the answer to the second question, the consensus in the scientific community is that it's only a matter of time and engineering effort before the abilities and power of quantum computing are being deployed to develop more efficient batteries and more effective drugs, and to unlock the secrets of nuclear fusion. 

In this issue we look at what needs to happen to get there. It won't be easy, but the experts we spoke to believe everything is in the realm of the doable. 

Also in this issue: 

  • China’s DeepSeek could turn the AI race on its head
  • The push to capture CO₂ with rocks and water
  • What happens next with the H5N1 bird flu?

As always, thanks for reading.

    Danielle Mattoon
    Executive Director, Aventine

    The Big Idea

    Quantum Computing’s Path to Utility

    In 2024, a series of advances pushed quantum computing toward a new frontier. Researchers finally demonstrated that one of the major stumbling blocks facing the technology — its susceptibility to errors — was surmountable, paving the way for devices that will one day revolutionize fields such as drug discovery, battery science and nuclear fusion.

    “We're switching now to engineering roadmaps,” said Carmen Palacios-Berraquero, founder and CEO of the quantum computing company Nu Quantum. “It's where the industry is at.”

    The shift raises important questions: What exactly are the engineering advances needed to make the technology useful, how long will it take to develop them, and what will quantum computers be capable of when such advances are in place? Aventine spoke with experts in academia, at startups and at large technology companies to understand what happens next. 

    While the consensus is clearly that a lot of work is still to come, there’s also a sense that the technology has made a fundamental transition: Once an esoteric theoretical physics problem, quantum computing is now an engineering challenge that simply requires hard work, patience and money to turn it into a practical reality.

    What will quantum computers be able to do? 

    Let’s be clear: Quantum computing will never replace classical computing. It is simply not well suited to perform many kinds of calculations that are the bread-and-butter work of classical computers. For plenty of other applications quantum computing will be prohibitively expensive. 

    Yet there are problems that quantum computing is uniquely able to solve. Quantum mechanics, on which quantum computing is based, explains on a basic level the behavior of atoms and molecules and how they interact with each other — behavior based not on absolutes, but on probabilities. In order to measure, for example, the position of an electron that is orbiting within an atom, you must understand that the electron is not in one place; it has some probability of being everywhere. Quantum mechanics, then, can describe at a fundamental level how things like pharmaceutical drugs and batteries work and also how biological processes such as photosynthesis occur. This is what prompted the famous physicist Richard Feynman — arguably the first proponent of building a quantum computer — to say that, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical.”

    Quantum computing is built on this concept, which means that it works in a fundamentally different way from classical computing.

    In a classical computer, information is stored in bits, which use voltages to represent either zero or one. In a quantum computer, information is stored in qubits, which use quantum properties — the spin of an electron, say, or the charge on a microscopic superconductor — to represent the probability of it being zero or one. This is sometimes explained as qubits taking all the possible states between zero and one at the same time. In a quantum computer, multiple qubits interact with each other in order to perform calculations. A good example of this might be calculating the energy levels of a molecule — information that can tell us all kinds of things about how the molecule behaves and reacts with other chemicals — by having each qubit represent the energy of one of the individual atoms that make up that molecule. Being able to perform these kinds of calculations will have “enormous implications for things like fusion energy, battery power, pharmaceuticals,” said Charina Chou, chief operating officer of Google Quantum AI, the team building the company’s quantum computers.
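
    To make the contrast concrete, here is a minimal sketch, in plain Python, of how a qubit’s state is described mathematically. The numbers are purely illustrative and aren’t tied to any particular quantum hardware or software library.

```python
import numpy as np

# A classical bit is definitively 0 or 1.
classical_bit = 1

# A qubit is described by two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition
qubit = np.array([alpha, beta], dtype=complex)

print("P(0) =", abs(qubit[0]) ** 2)   # 0.5
print("P(1) =", abs(qubit[1]) ** 2)   # 0.5

# Two qubits together need four amplitudes (00, 01, 10, 11), three need
# eight, and n qubits need 2**n -- which is why simulating even modest
# quantum systems quickly overwhelms classical machines, and why Feynman
# argued that simulating nature calls for quantum hardware.
two_qubits = np.kron(qubit, qubit)
print("Two-qubit amplitudes:", np.round(two_qubits, 3))
```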

    There are other compelling applications, too, which include some types of machine learning, accelerating the ability to determine the best solution to a problem for which there are many possible answers (a field known as optimization), and finding the prime factors of large numbers, which is the mathematical operation that underpins the encryption protocols that keep the internet secure. (More on this last one shortly.) 
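
    On that last point, a toy sketch helps show why factoring matters: encryption schemes such as RSA build a public key from the product of two large primes, and their security rests on how hard it is to recover those primes. The numbers below are tiny stand-ins; real keys use primes hundreds of digits long.

```python
# Toy illustration of why factoring underpins much of today's encryption.
# Real RSA keys use primes hundreds of digits long; these are stand-ins.
p, q = 61, 53
public_modulus = p * q   # 3233; published openly as part of the key

def factor(n: int) -> tuple[int, int]:
    """Brute-force trial division: instant for 3233, hopeless on any
    classical machine for a 2048-bit modulus."""
    for candidate in range(2, int(n ** 0.5) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no nontrivial factors found")

# Recovering the primes is exactly what breaks the key -- and the task a
# sufficiently large quantum computer running Shor's algorithm could do
# efficiently.
print(factor(public_modulus))   # (53, 61)
```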

    The consensus among the experts Aventine spoke with was that quantum processors will, over time, simply become part of a high-performance computing stack that scientists, engineers and software developers can access to run their code. IBM’s quantum roadmap suggests that could happen within a decade, though that timeline could expand or contract depending on how quickly progress is made. At that point, just as artificial intelligence workloads run more efficiently on specific chips known as GPUs, so some problems will be solved more efficiently on quantum processors, or QPUs. The end goal, said Ken Brown, an engineering professor at Duke University focused on quantum computing, is that users simply won't know that part of their code is being performed by a quantum processor when they send it to the cloud. In other words, you can think of quantum computers as becoming highly task-specific resources that will be tapped when required in order to run particular pieces of code more quickly.
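
    As a rough sketch of what that workflow could eventually look like, the snippet below routes one subroutine of a larger job to a quantum backend. The dispatch function and backend labels are hypothetical placeholders, not any vendor’s actual API; today each provider exposes its own cloud interface.

```python
# Hypothetical sketch of a hybrid classical/quantum workflow: most of a job
# runs on CPUs or GPUs, and selected subroutines are flagged for a QPU.
# The scheduler below is a stand-in, not a real cloud API.

def dispatch(task: str, needs_quantum: bool) -> dict:
    """Pretend cloud scheduler: decides where a piece of work would run."""
    backend = "QPU" if needs_quantum else "GPU"
    return {"task": task, "routed_to": backend}

# Classical pre- and post-processing stays on conventional hardware...
print(dispatch("build_molecular_model", needs_quantum=False))

# ...while the step that benefits from quantum simulation is sent to a QPU,
# ideally without the user ever having to know the difference.
print(dispatch("estimate_ground_state_energy", needs_quantum=True))
```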

    What’s been holding quantum computing back?

    Even as recently as twelve months ago, all this was only a distant hope because quantum computers were too error-prone to be considered useful. 

    To solve problems with quantum computing the first thing you need is a computer with enough qubits — a number that will vary depending on the calculation. To study a single molecule, for instance, “the number of quantum elements you might want to study … is roughly the number of qubits you need,” said Jerry Chow, director of quantum infrastructure at IBM. For medium-size or larger molecules, that could mean at least a hundred qubits, said Palacios-Berraquero. Second, you need those qubits to work properly for long enough that the calculation can be performed. The difficulty here is that the quantum states that represent information inside a qubit are delicate and easily disrupted by things like fluctuations in temperature or electromagnetic interference from electronic circuits. Such disruptions can introduce errors into calculations. And while small errors can be mitigated, they can compound if many qubits are added together or allowed to run long enough to perform more complex calculations. 

    So in recent years, there has been an effort to develop an approach known as quantum error correction. This is built around the idea of developing error-free qubits called logical qubits, each of which is composed of multiple individual qubits. These individual qubits work together, some holding information and others checking that the information is correct, to reduce errors. If the individual qubits are highly prone to errors, this won’t work; adding more qubits will just make it more likely that the logical qubit will be error prone. But if the error rate of each individual qubit can be made to fall below a certain threshold, combining them could theoretically drive down the overall error rate of the logical qubit. “The promise of quantum error correction is that you can make arbitrarily good objects out of bad things,” said Brown.
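
    A back-of-the-envelope calculation shows why that threshold matters. A common rule of thumb for surface-code-style schemes is that the logical error rate scales roughly as (p / p_threshold)^((d + 1) / 2), where p is the physical error rate and d is the code distance (3 for a 3x3 patch, 5 for 5x5, and so on). The numbers below are illustrative, not measurements from any real device.

```python
# Illustrative scaling for quantum error correction (not device data).
# Rule of thumb: p_logical ~ (p / p_threshold) ** ((d + 1) / 2), where p is
# the physical qubit error rate, p_threshold the code's threshold (on the
# order of 1% for surface-code-style schemes), and d the code distance.

def logical_error_rate(p: float, p_threshold: float, distance: int) -> float:
    return (p / p_threshold) ** ((distance + 1) / 2)

P_THRESHOLD = 0.01
for p in (0.02, 0.005, 0.001):   # above, just below, and well below threshold
    rates = [logical_error_rate(p, P_THRESHOLD, d) for d in (3, 5, 7)]
    print(f"physical error {p:.1%}: d=3,5,7 ->", [f"{r:.1e}" for r in rates])

# Above threshold, bigger patches make the logical qubit worse; below it,
# each step up in distance suppresses errors further -- the "arbitrarily
# good objects out of bad things" that Brown describes.
```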

    Last year, a series of research results from Google, Microsoft, AWS and Yale University confirmed that this theory works in practice. Google made the most prominent announcement with Willow, its most advanced quantum chip, introduced in December. It demonstrated that enlarging the grid of a logical qubit from 3x3 to 5x5 to 7x7 halved the error rate at each step. This was a major breakthrough, proving for the first time that it is possible to build devices with low enough error rates to perform useful calculations. 

    The natural extension of this idea is to simply build larger and larger grids of logical qubits to handle increasingly complex calculations. But there’s a hitch: A 7x7 logical qubit currently requires a total of 97 individual qubits — about half for holding data and the rest for error-checking. A useful quantum computer would need a minimum of 100 logical qubits, while a computer performing groundbreaking scientific calculations might require more like 1,000 or 10,000 logical qubits, which would mean somewhere in the region of 10,000 to a million individual qubits. For context, Google’s Willow chip has just 105 individual qubits.  
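
    The arithmetic behind those figures can be sketched as follows, assuming a surface-code-style layout in which a distance-d logical qubit uses d×d data qubits plus d×d−1 checking qubits (97 for the 7x7 case). The distances needed for genuinely useful machines may well be larger, which would push the totals higher still.

```python
# Rough qubit-counting arithmetic for surface-code-style logical qubits.
# A distance-d logical qubit uses d*d data qubits plus d*d - 1 check qubits.

def physical_per_logical(distance: int) -> int:
    return distance * distance + (distance * distance - 1)

print(physical_per_logical(7))   # 97, the 7x7 case cited above

# Holding the distance at 7 for illustration, machines with 100, 1,000 or
# 10,000 logical qubits land in the range described above -- roughly 10,000
# up to about a million physical qubits. Harder calculations may demand
# larger distances, and therefore even more physical qubits per logical one.
for n_logical in (100, 1_000, 10_000):
    total = n_logical * physical_per_logical(7)
    print(f"{n_logical:>6} logical qubits -> ~{total:,} physical qubits")
```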

    So the challenge ahead is to build a quantum computer with enough logical qubits to perform useful calculations. That means building chips that contain far more individual qubits, while also finding ways to reduce the number of individual qubits required to build a logical qubit.

    Building chips with more qubits

    Adding qubits to a chip is not a simple matter. First, building high-quality qubits isn’t easy, and reliably adding more to a chip is a challenge. In the case of the biggest companies working on quantum computing, such as Google and IBM, the underlying technology is built on so-called superconducting qubits. These are microscopic circuits that are in some ways similar to those used in modern semiconductors, except the resulting chip must be cooled down to a temperature close to absolute zero to function. The quality of each tiny circuit must be very high for it to function properly. Each individual qubit can “really be affected by all kinds of things,” said Chow, including microscopic traces of contamination on the chip or slight variations in the process that is used to print them. While it’s possible to reliably print a chip with, say, 100 individual qubits, scaling that up to 1,000 or more is challenging, with the chances of dud qubits quickly escalating.

    To address this challenge and make reliable larger chips, IBM and Google are both working to refine production processes, said Chow and Chou. Other companies are exploring how to leverage the expertise of the semiconductor industry: John Martinis, a longtime quantum physics professor at UC Santa Barbara and former head of quantum computing hardware at Google, recently co-founded a startup built around that premise. “By using a semiconductor foundry, we can improve the reliability and quality of the fabrication, but also make more of them,” he said. 

    The second challenge of building chips with more qubits is the surrounding infrastructure they require. For context, photographs of existing quantum computers with 100 or so qubits depict a nest of wires and cables that are needed to cool and control each qubit. “I have to send in all these wires … my fridge can only take so much heat,” said Brown. Scaling the infrastructure up for a device with tens of thousands of qubits is physically impossible because there just isn’t enough space. This makes figuring out how to miniaturize current infrastructure systems a priority.

    There is another way to add more qubits to a computer: network multiple quantum chips together. This is a significant focus at both IBM and Nu Quantum, said Chow and Palacios-Berraquero. The appeal of this approach is that it requires neither scaling up individual chips nor as much miniaturization of the supporting infrastructure, though it does require designing new kinds of quantum communication systems that link individual chips to one another.

    All of these aims are achievable. Engineering improvements will make larger chips possible; miniaturizing the supporting infrastructure is a fairly classic challenge, similar to the shrinking of modern electronics; and quantum networking is well underway. Yet the pace at which all of these improvements will arrive is unclear, which means that reducing the number of physical qubits required to make a logical qubit is an equally important priority for the future of quantum computing.

    Refining logical qubits

    Perhaps the most straightforward way — or at least, the most straightforward-sounding way — to reduce errors in a quantum computer is simply to make individual qubits themselves less prone to error. This would mean that fewer individual qubits would be needed to assemble a logical qubit, freeing up space on a chip. There are various efforts underway to achieve this, mostly by tweaking variables that can improve qubit quality, such as their physical structure, the materials used to make them and the processes through which they’re manufactured. “There are very specific [small] engineering tweaks,” said Chou. “[And] there are step changes that come from science, and we're always learning from those.”

    The other way to improve how efficiently the qubits on a chip are used is to change the error-correction scheme itself. Google’s Willow chip makes use of an error-correction approach known as Surface Code, which has existed as a theoretical concept since the 1990s. It’s the go-to approach in the quantum community for a few reasons: It’s well established theoretically, scientists know how it can be used to perform calculations between logical qubits, and it requires only that qubits be able to communicate with adjacent qubits. That last point is important, as the superconducting qubits being built by the likes of Google and IBM sit on a grid; enabling them to connect with qubits that aren’t directly adjacent would require more complex geometries or wiring.

    But Surface Code is “really inefficient,” said Palacios-Berraquero. More efficient codes have been developed that scale far more effectively than Surface Code, but they are less well understood and would require communication among a greater number of qubits, increasing the complexity of quantum chip design. Until alternative codes are better developed, Surface Code remains the industry standard. “I want to know exactly what to do to build [quantum computers] right now,” said Martinis. “If something better comes along, we'll pivot.”

    Closing in on utility

    The reality is that a combination of the approaches described above — figuring out how to add more qubits to chips and making smaller logical qubits — will be necessary to advance quantum computers. That is reflected in the engineering roadmaps Palacios-Berraquero referred to, which are increasingly being published by Google and IBM, as well as by other startups in the sector. They show plans for exactly these things: incremental increases in the number of qubits and gradual advancements that improve performance of individual and logical qubits.

    The goal is to create a device capable of running practically useful algorithms — what’s sometimes referred to as utility-scale quantum computing. IBM argues that it already provides this: Researchers are using its existing quantum devices to model some chemistry problems in a way that can’t be done on classical computers. But most of the quantum computing community believes that utility is still a little way off. While it’s difficult to make concrete predictions, because a quantum computer’s performance is about more than just the number of qubits, the general view is that the first truly useful calculations, which will likely predict the energy states of small- or medium-size molecules, will require somewhere in the region of 100 to 1,000 logical qubits. Meanwhile, factoring large numbers — the calculation at the heart of breaking the internet’s encryption — will require upward of 10,000 logical qubits, made up of potentially millions of individual qubits. That’s further out than Google’s publicly available roadmap currently extends. On IBM’s, it sits at 2033 or beyond.

    During a video call with Aventine, Martinis placed a hand at one side of the screen to indicate the location of current quantum hardware in performance terms, and a hand at the other to show where it needs to be if it’s to be useful for practical applications. “There's a big gap,” he said. “It's orders of magnitude.” In the past, it was never really clear if making that leap was possible. Now, he said, “it's just a matter of closing the gap.”

    Quantum Leaps

    Advances That Matter

    China’s DeepSeek could turn AI on its head. It’s been hard to avoid the excitement around a new AI reasoning model called R1 from a Chinese startup called DeepSeek, and there’s a lot to take in. First, R1 seems to be incredibly competent, competing with or beating contenders such as OpenAI’s o1 model in some benchmark tests focused on math and science, according to a research paper published by the company. Second, it has been made available on an open-source license — though without details of the training data — which means researchers will be able to use it, study it closely and verify its performance. But third, and perhaps most important, is that R1 is based on an underlying model called V3 that was designed to make far more efficient use of training data than models built by the likes of OpenAI, Anthropic, Meta and others. Partially spurred by U.S. export controls that have limited DeepSeek’s access to cutting-edge AI chips, the company has developed ways to make its models more efficient. The AI research institute Epoch AI explains that training DeepSeek’s V3 model required one-tenth of the computing power of Meta's comparable Llama 3.1 model. This runs counter to the narrative of the last two years, in which brute force scale — bigger data sets, bigger models, bigger data centers, more electricity and more CO₂ emissions — was the only path to building better models. It’s this that has sparked gyrations in the stock market; all of a sudden, investors wonder whether companies need all those Nvidia chips, and whether tech giants such as Google, Meta and Microsoft really do hold all the power. “Everything around DeepSeek is quite impressive,” wrote José Hernández-Orallo, a professor at the Valencian Research Institute for Artificial Intelligence, in an email to Aventine. “Basically any medium-size company on the planet can build [an OpenAI] GPT-4-level model in a few weeks [now].” It’s too soon to tell exactly what impact this will all have on AI more broadly — but at the time of writing, it feels that it could be profound. And Hernández-Orallo pointed out that combining the ideas that underpin DeepSeek’s data-efficient models with the computing power of OpenAI may provide “amazing” results.

    How we could trap CO₂ with water and rock. A simple approach to removing CO₂ from the atmosphere by washing water over rocks is gaining traction — though it remains to be seen if it will work at scale. The underlying concept, described by Scientific American, is straightforward: When water hits rock, traces of calcium and magnesium are released that draw CO₂ out of the atmosphere to form bicarbonate that gets dissolved in the water. The runoff then makes its way to oceans, where carbon is stored as bicarbonate for hundreds of years. The process can be industrialized by crushing rocks, spreading them out in a thin layer and letting nature do the rest — an activity referred to as enhanced rock weathering (ERW). A side benefit: The process can neutralize acidic soil, meaning farmers can spread rock to boost crop yields and capture CO₂ at the same time. Recent trials have shown that ERW can significantly boost the amount of carbon a given hectare of farmland captures, and some predictions suggest it could lock away two billion metric tons of CO₂ per year — about 20 percent of what will be needed to keep global average temperature increases in check. But there’s reason to proceed with caution: In some soils the approach appears to be far less effective, and the required mining, grinding and transportation of rock could wipe out carbon gains if the technology isn’t carefully rolled out. Still, startups are pushing forward. As many as two dozen companies are working on ERW, according to Scientific American, and some are even starting to make headlines: The Verge reports that a group of companies including Google and Salesforce are paying $27 million to a startup called Terradot to capture 90,000 tons of CO₂ using ERW. The success, or otherwise, of these early commercial ventures could seal the fate of a promising technology.

    What happens next with the H5N1 bird flu? Five years on from when Covid-19 began transforming our lives, there is a new potential pandemic threat in the shape of the H5N1 bird flu. The first known human H5N1 fatality in the U.S. earlier this month understandably made big headlines, but the virus is not a significant threat to the human population at this point. At the time of writing, there have been 67 confirmed cases of H5N1 in humans in the U.S., with 63 of those individuals known to have been exposed to either cattle or poultry, which both carry the virus. There is currently no evidence that the virus can spread among humans, but what happens next is important: Will the virus adapt so that it can jump between people, or not? There are two schools of thought on this, Mark Woolhouse, a professor of infectious disease epidemiology at the University of Edinburgh in Scotland, told Aventine. “In my view, the longer we go on where we have more and more cases and still no evidence of transmission, I think that's a good thing,” he said. The thinking there is that most pandemic-causing viruses are ready to spread as soon as they make the jump from animal to human. “The contrary view is that [when] it spills over into humans, it's getting the chance to adapt, and there will be more opportunities for evolution,” he added. Health researchers are closely monitoring genetic changes in the virus, because even tiny changes could enable human-to-human infection. The good news is that the U.S. is sitting on a stockpile of H5N1 vaccines that have been shown to be beneficial against the current strain of the virus, and pharmaceutical companies are working on new vaccines for bird flu. There’s also the potential to vaccinate poultry or cattle to contain the virus. How any of those vaccines would be deployed would become a political decision, though, and it’s currently unclear how the Trump administration is thinking about the situation.

    Long reads

    Magazine and Journal Articles Worthy of Your Time

    Friend or faux? from The Verge
    8,700 words, or about 35 minutes

    Is it possible to form a friendship with an AI? To develop romantic feelings for one? This story makes it very clear that the answer to both of those questions is an unequivocal yes, even for people who are aware of the underlying technology and capabilities of LLMs. And that raises all sorts of fascinating questions, for users, companies building AI companions, and society as a whole. What do you do if a code update changes the personality of an AI bot? What happens to the thousands of relationships facilitated by an AI companion startup if it goes bust? And what does the allure of anthropomorphism mean for the rest of us as we learn how to communicate with and behave toward AI bots? It is mind-boggling stuff for many of us, but it feels like essential reading at a moment when we’re all grappling with the role that AI plays in our day-to-day lives. (To learn more about the implications of human-AI relationships, listen to Aventine’s podcast, When Bots Become Our Friends, hosted by Gary Marcus, or read the transcript here.)

    Destination Mars, from Noema
    3,800 words, or about 15 minutes

    Setting up a colony on Mars has always seemed like something of a pipedream to most people, unless you’re Elon Musk. But President Trump seems increasingly supportive of Musk’s vision, and with the two men working together ever more closely, the next four years could see significant progress in the direction of the Red Planet. This essay takes a close look at what that could look like — in terms of who makes it happen, who has control over settlements that get established 140 million miles away, and whether there are cautionary tales from the colonization of the New World way back in the 17th century. What’s unavoidably clear is that the huge expense involved means that some degree of corporate control over any Mars colonization is almost inevitable — and that, left unpoliced, many of the more troubling elements of  the first English settlements in the Americas could easily repeat themselves.

    The Parting of Water, from Science
    2,600 words, or about 10 minutes

    Making green hydrogen is easy. The chemical reaction underpinning it, which uses electricity to split water into hydrogen and oxygen, was discovered more than two centuries ago, and the process was industrialized using renewable hydroelectric power in the early 1900s. But even with modern processes in place, manufacturing it is very, very expensive — about five times the cost of manufacturing gray hydrogen, which is made using a carbon-intensive industrial process that uses steam to break down methane to produce the gas. One of several keys to driving down the cost of green hydrogen will be building efficient, resilient devices called electrolyzers, which can use renewable energy to split water into hydrogen and oxygen without creating any carbon emissions. This story is a deep dive into the different technologies that are competing to make that a reality. None of them are a slam dunk yet, with safety concerns, durability and high costs among the issues that need to be ironed out before they’re ready for prime time. But some of the approaches show huge promise, with some lab-scale demonstrations already hitting efficiency targets that industrial versions of the process are hoped to achieve by 2050. (To learn more about green hydrogen and how it might be scaled, listen to Aventine’s podcast about it here, or read the transcript here.)
