Dear Aventine Readers,
This month we look at the quest to create artificial general intelligence, or AGI. It has long been an endgame for many AI researchers, but there is little consensus on exactly what it is or how it will announce itself if and when it arrives. We try to get you some clarity.
Also in this issue: new robots that can train themselves, an effort by the Philippines to stave off looming job losses caused by the automation of customer service work, CRISPR-edited vegetables and the legal and ethical implications of brain-computer interfaces (BCIs), which are getting better and better at reading our thoughts.
Thanks for reading,
Danielle Mattoon
Executive Director, Aventine
Will We Know When We've Built an AGI?
In 1950, the computer scientist Alan Turing proposed a test that he called the imitation game. It was designed to avoid the question “can machines think?” — too meaningless to warrant discussion, he thought — and instead sought to determine if a machine could exhibit behavior indistinguishable from that of a human. In the test, an evaluator reads a transcript of a conversation between a machine and a human; if the evaluator can’t identify which contributor is the machine, the machine passes the test.
In 1950, passing would have been extraordinary. Now — over 70 years later and with the arrival of ChatGPT and its ilk — the Turing test seems quaint. “People seem to have completely dropped [the Turing test] these days,” explained Anders Sandberg, a senior research fellow at the University of Oxford’s Future of Humanity Institute, “since [large language models] are so good at winning it.”
But the inadequacy of the Turing test doesn’t mean we’ve given up on measuring artificial intelligence. If anything, the sophistication of new generative AI models has only intensified the desire to identify, test and prepare for the possible arrival of what seems to be the endgame for AI — artificial general intelligence, or AGI.
Broadly considered to be a system that can solve problems in almost any domain, AGI could bring transformative changes to the world, ranging from massive and widespread economic benefits to the extinction of the human race, depending on whom you ask. It has also become the organizing goal for some of the largest AI firms today: OpenAI’s stated objective is to create “safe AGI that benefits all of humanity,” while Google DeepMind’s mission is “to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence.”
Developing a way to identify AGI and alert us to its potential arrival has therefore taken on a certain urgency. “We're at a very sensitive time in human history,” explained Henry Shevlin, associate director at the Leverhulme Centre for the Future of Intelligence in Cambridge, U.K. “Even a dim vision in a crystal ball about what's coming, what's likely to happen, is going to be incredibly valuable.”
The challenge of preparing for AGI is that there is no consensus about the specific properties an AGI must have. Several of the experts Aventine spoke to, however, pointed to work by Shane Legg, a founder of Google DeepMind, and Marcus Hutter, a senior researcher there, as a good starting point: The pair define intelligence as “an agent's ability to achieve goals in a wide range of environments.”
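For readers curious about the formal version: Legg and Hutter later turned that one-line definition into an equation, a “universal intelligence” score that averages an agent's performance across all computable environments, weighting simpler environments more heavily. The sketch below is a simplified rendering of their measure, not the full formalism:

```latex
% Legg & Hutter's universal intelligence measure (simplified sketch):
% the agent \pi is scored by its expected reward V_\mu^\pi in each
% computable environment \mu in the space E, weighted by the
% environment's simplicity, 2^{-K(\mu)}, where K is Kolmogorov complexity.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

The intuition: an agent earns a high score only by doing well across many environments, which is exactly the breadth-over-depth property the AGI tests discussed below try to probe.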
With this general goal in mind, many scholars and technologists have tried to develop tests that could measure whether artificial intelligence systems meet it. Though this is far from an exhaustive list, some examples include:
What these and other tests attempt to assess is the breadth and generalizability of an AI’s abilities — reasoning, planning, communication, problem-solving, dexterity and so on.
Given that an AGI will need an array of competencies, Gary Marcus, the host of Aventine’s AI podcast “Humans vs. Machines,” has proposed a smorgasbord of tests across which an AI could be assessed, including the challenge of assembling IKEA furniture.
Adding more and more tests isn’t without its own issues, though. “The problem with trying to tally them all to produce a universal ‘intelligence score’ is that we don't actually understand human-style intelligence mechanistically,” explained Rob Bensinger, the head of research communications at the Machine Intelligence Research Institute in Berkeley. “We don't know which sub-skills are most important.”
On top of all of this, the way AGI arrives could also affect our ability to recognize it. There are, broadly speaking, two main schools of thought about how an AGI will emerge. The first, referred to as slow or soft takeoff, suggests that AGI will arrive gradually as the result of steady progress over the course of years or decades. The second, known as fast or hard takeoff, suggests that AGI will appear seemingly out of nowhere in a matter of weeks, days or hours.
In the case of soft takeoff, Sandberg points out, an AGI could have wide-ranging abilities and thus meet the threshold for generalizability, but its performance in specific tasks could “be so weak that it does not matter until it gets smarter” — meaning that it will be outperformed by narrower AIs for some time. In the case of a hard takeoff, an AI could appear that would immediately be able to pass one or maybe all of the AGI tests that have been proposed.
“Part of the reason AGI is a messy concept is because advances in different cognitive domains have been uneven,” wrote Shevlin in an email. “It may well be the case that the kind of system capable of wiping out humanity or revolutionizing science or transforming our economy or instantiating consciousness will still not technically be AGI, insofar as it happens to lag in some arbitrary domains.”
Still, the benefit of trying to peer into the future is that it helps us think about and prepare for all sorts of outcomes. Attempting to understand how we’ll identify AGI if and when it emerges remains just as vital now as it was for Turing in 1950, however challenging it is.
Advances That Matter
Safely Freezing Organs for Transplant Could Become a Reality. Some things, like soups, freeze well. Others, like lettuce, not so much. Bodily organs fall firmly into the second camp: Try to preserve, say, a kidney by simply putting it in a freezer, and ice crystals will destroy its cells, the changing chemical composition inside the organ as water freezes can prove toxic, and uneven thawing can damage the tissue further still. The ability to safely freeze organs would be a life-saving development for hundreds of thousands of people worldwide; currently no more than 10 percent of people who need organ transplants get them, according to the World Health Organization. Science reports that significant progress is underway, thanks to gentle antifreeze-like chemicals and rapid cooling techniques that reduce damage from ice crystals. In rats, the new techniques have led to the successful transplant of a frozen and thawed kidney. Next stop: pigs. And if that’s successful, researchers hope that human organs won’t be far behind.
An AI-powered Robot Can Now Teach Itself. Google DeepMind published research describing an AI agent called RoboCat that is able to learn how to perform new tasks and operate entirely unseen robotic arms by observing as few as 100 demonstrations. The AI is trained on a dataset of various robot arms performing hundreds of different tasks and then exposed to demonstrations of a new task. At this point it is able to spin up a new version of itself that repeats the new task as many as 10,000 times. Those attempts are then assimilated into the main dataset so that subsequent models can execute the new task. This kind of self-guided learning is considered to be an important part of building future AIs that can improve without human intervention. The model’s versatility — the system is able to perform many tasks on many different pieces of hardware — was hailed as impressive by Eric Jang, former senior research scientist at Robotics at Google, in his analysis of the work.
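The training loop described above can be sketched, very loosely, in a few lines. Everything here is an illustrative stand-in — the class names, the string-based “trajectories” and the episode counts mirror the article's description, not DeepMind's actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy stand-in for a RoboCat-style generalist agent."""
    dataset: list = field(default_factory=list)

    def fine_tune(self, demos):
        # Spin up a task specialist from a small set of demonstrations
        # (the paper reports as few as 100).
        return Agent(dataset=self.dataset + demos)

    def self_practice(self, task, n_episodes):
        # The specialist attempts the new task repeatedly, generating
        # fresh training data without further human input.
        return [f"{task}-episode-{i}" for i in range(n_episodes)]

# 1. Start from a generalist trained on many prior robot-arm tasks.
generalist = Agent(dataset=["prior-task-data"])

# 2. Show it ~100 human demonstrations of a new task.
demos = [f"new-task-demo-{i}" for i in range(100)]
specialist = generalist.fine_tune(demos)

# 3. The specialist practices the task up to 10,000 times on its own.
new_data = specialist.self_practice("new-task", 10_000)

# 4. Fold the self-generated experience back into the shared dataset,
#    so the next generation of the generalist handles the task natively.
generalist.dataset += demos + new_data
```

The key design point is step 4: because self-generated episodes flow back into the main dataset, each training cycle starts from a broader base than the last, which is what makes the loop “self-improving.”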
Gene-Edited Vegetables Are Coming to Your Produce Aisle. Do you find mustard greens too peppery? Then let me introduce you to the North Carolina-based start-up Pairwise, which has used the DNA-editing tool CRISPR to remove a gene that’s responsible for the strong taste. The result — which is the first CRISPR-edited food to hit the U.S. market — is supposed to retain the health benefits of the vegetable while making it more appealing to consumers. The advance may represent an inflection point for engineered foods because, unlike modified plants of the past, CRISPR editing doesn’t insert genetic material from different species into existing plants. Instead, it speeds up a process that could theoretically have been a result of selective breeding, for instance by deleting genes. That could mean that the public — historically wary of GMOs, which also now have to be labeled as “bioengineered” — could become more accepting of the results.
The Future of Brain Computer Interfaces
In May 2023, neuroscientists in Switzerland published a breakthrough in the journal Nature: A paralyzed man regained his ability to walk by using his thoughts to control his movements.
This was just the most recent of several important achievements in neuroscience credited to brain-computer interfaces. Also known as BCIs, the most effective of these systems use hardware implanted in the brain to connect the brain’s electrical signals to a computer that can interpret them; the signals can also be connected to another device, such as a robotic arm, that actuates them. (Elon Musk’s Neuralink project, perhaps the best-known commercial endeavor in the field, aims to create an implantable brain chip that could restore movement to a paralyzed person, much the way movement was restored to the subject of the Nature article.)
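The signal path described above — electrodes, a decoder, then a device that acts on the decoded intent — can be sketched as three stages. All names and thresholds here are hypothetical placeholders; real systems use specialized implanted hardware and machine-learned decoders:

```python
def read_electrodes():
    """Stand-in for sampling raw voltages from implanted electrodes."""
    return [0.12, -0.08, 0.33]

def decode_intent(voltages):
    """Stand-in decoder: maps neural signals to a movement command.

    Real BCIs use trained models here; this toy threshold just
    illustrates the signal-to-command step.
    """
    return "step_forward" if sum(voltages) > 0 else "rest"

def actuate(command):
    """Forward the decoded command to a device — e.g. spinal
    stimulators (as in the Nature study) or a robotic arm."""
    return f"executing {command}"

# The full loop runs continuously in a working system:
signal = read_electrodes()
command = decode_intent(signal)
result = actuate(command)
```

Each stage is also where the field's open problems live: the hardware (stage one) is what experts below call the main barrier, and the decoder (stage two) is what raises the privacy questions around brain data.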
Experts in the field agree that the primary barrier to widespread use of the technology is the hardware, which requires invasive surgery to implant and leaves patients with damage to their brain tissue. When the hardware becomes less invasive, the technology will likely become far more accessible to people with paralysis, neurodegenerative diseases like ALS and other impairments. And though BCIs used for mind-reading or next-generation video game-playing are still decades away, even near-term improvements in the technology raise profound ethical questions about who should have access to such life-changing technology, the privacy of brain data and who bears the responsibility for caring for people using BCIs.
We asked five experts for their thoughts on the state of the field and the questions that most concern them. Their comments are edited for clarity and brevity.
“It will take mostly engineering efforts to miniaturize everything and to make a system that the patient can trust. The technology is there, the need is there, [but] it will take a few years before we can miniaturize the systems. Honestly, I’m quite excited because I see that more and more companies are investing in developing BCIs, the field in general is very active.”
— Dr. Henri Lorach, head of the Brain Spine Interface unit at NeuroRestore, a research center affiliated with the Lausanne University Hospital
“There are a lot of very basic practical questions that really have to be addressed now before we start doing this at any sort of scale with patients. Do these devices have to be removable? Can they be removed if somebody wants them removed? What happens when somebody gets an implant as part of a clinical trial, or even as part of a device that’s FDA approved, and then for whatever reason the company that makes the device disappears? Let’s say that one of these devices works so well that it completely changes the life of a person with ALS. They go from not being able to communicate anything with the outside world to suddenly being able to, and then the company goes under. Are we now willing to take away that ability from someone who has it?”
— Dr. Matthew Leonard, associate professor of neurological surgery at the University of California, San Francisco Weill Institute for Neurosciences
“It’s a big frustration for us that we have made many breakthroughs over the years, but it’s still very difficult to raise money. What we need is just money to bring all of this technology to the market, and to help hundreds of millions of people to use it. I think we are in the state where we can develop useful BCI technology, it’s just very expensive and it needs to be very practical to use for the patient. And then you need to have a company that dedicates a lot of money to developing a technology that will be reimbursed by insurance. The insurance companies need to reimburse the technology, otherwise it will just be a handful of individuals who will pay for this.”
— Dr. Gregoire Courtine, professor of neuroscience at École Polytechnique Fédérale de Lausanne and co-director of NeuroRestore
“What I’m concerned about is legislation. Legislation is not there whatsoever. The legislators all around the world are doing reactive work. We see Congress react only when a system is being released. With ChatGPT, it’s like: ‘We all woke up, oh let’s do something?’ What are you going to do, put the genie back into the bottle? Right now is a perfect time to actually introduce legislation [for BCIs], because the system is not yet there. The true BCI that reads thought isn’t there. If that system arrives, it will be much too late.”
— Dr. Nataliya Kosmyna, research scientist at the Massachusetts Institute of Technology’s Fluid Interfaces group
“Ultimately all of the BCI companies out there are splitting this really small population of potential patients, where a solution for those people would be to drill a hole in their head. Fortunately, not that many people are actually in a situation that is so dramatic. It’s difficult to claim really large market sizes, and traditional VCs don’t really fund that. Some of the most prominent companies out there are funded and mostly owned by billionaires with big egos, people who don’t necessarily think about it in terms of traditional market business. What keeps me up at night about the field is actually mostly that I feel like there is too much hype. And I am concerned about blowback if and when some of the technologies do not live up to the hype, because that’s something that might hurt a lot of promising approaches.”
— Peter Ledochowitsch, chief technology officer at Canaery, a neurotech startup digitizing the olfactory system in animals
If you’d like to learn more about BCIs, listen to Aventine’s interview with Dr. John Donoghue, one of the pioneers in the field, on our site, Spotify, Apple or wherever you get your podcasts.
Technology’s Impact Around the Globe
1. Ghana: Farmers across Africa will soon have a helping hand when it comes to tending their crops, in the shape of a new AI system. The tool, called Africa Agriculture Watch, was developed by the pan-African research group AKADEMIYA2063 and combines satellite remote sensing and machine learning to predict crop yields across the continent, SciDev.Net reports. The system will provide insights that can be disseminated to help farmers make more effective site-specific decisions about processes such as irrigation, fertilization and pest control to increase yields. The team behind it hopes to find ways to directly communicate findings to farmers in the future, too.
2. Philippines: What happens when a large chunk of your economy is based on an industry that could be automated away by generative AI? The Philippines may soon find out. Rest of World reports that the nation is home to about 1.6 million workers employed as the result of business process outsourcing, the practice of offshoring entire functions such as customer helplines and technical support. These jobs account for 7.5 percent of the Philippine economy and could soon be wiped out, as generative AI threatens to automate customer service work. Lawmakers and business leaders in the Philippines are now trying to figure out how they can upskill workers to make use of AI rather than simply watch as the technology eats their lunch. That’s just one example explored by Rest of World in a fascinating package about how labor forces around the globe could be reshaped — or decimated — by the rise of AI.
3. Ukraine: In hospitals and even bomb shelters across Ukraine, premature babies have found a lifeline in a portable incubator that has been specially designed to work in challenging environments. Created by a company called mOm, the incubators weigh just 44 pounds, fold in half and can run on inconsistent power supplies or a battery. So far, reports The Times of London, 75 of the company’s incubators have been put to use in Ukraine to help save more than 1,500 babies, and the nation’s ministry of health has requested 100 more. (The incubator is also being tried out in four hospitals that belong to the U.K.’s National Health Service.) mOm’s founder and CEO James Roberts received the Royal Academy of Engineering’s Princess Royal Silver Medal for his work on July 13.
Magazine and Journal Articles Worth Your Time
The Promise and Peril of AI-generated Software, from IEEE Spectrum
3,000 words, 12 minutes
Whether you’re technically minded or not, it’s fairly easy to ask ChatGPT or another chatbot a question, read its response and know whether it was helpful or not. It’s harder, though, for many of us to judge whether the new systems write good computer code. And given that some large language models now exist specifically to automate the process of writing software, it’s important to understand whether we should be letting AI loose on building computer programs. This article takes a close look at how these AI tools work and what concerns we should have about them being rolled out at scale.
When You Can Expect Your Flying Taxi, from The Financial Times
3,300 words, 15 minutes
“We were promised flying cars and instead we got 140 characters,” Peter Thiel famously said. That was in 2013 and now, a decade later, you might think he’d be able to make much the same quip. But this feature argues that the flying taxi industry is approaching an important moment, with companies preparing to take test flights in the coming 18 months that could secure — or potentially set back — the future of the technology. There are many hurdles these companies have to overcome, both technical and regulatory, if they’re to succeed. This piece takes a clear-eyed look at the challenges ahead and how the main contenders are approaching them.
How Chip Fabrication Could Reinvent American Industry, from MIT Technology Review
4,500 words, 18 minutes
What happens when you pour $100 billion into a postindustrial U.S. city? Syracuse, New York, is about to find out. Thanks to the incentives offered as part of the CHIPS and Science Act — passed by Congress last year to boost R&D, safeguard supply chains and generally help ensure semiconductor sovereignty for the U.S. — the chipmaker Micron decided to build as many as four chip fabrication plants in the city over the next 20 years. This story by MIT Technology Review takes a close look at how the investment of as much as $100 billion came about, where the money will go, what could go wrong and the potential upside for Syracuse and other cities if the bet pays off.