Transcript for Season 4, Episode 8: The Race to Control AI
[ARCHIVAL TAPE]
Senator Josh Hawley: We could be looking at one of the most significant technological innovations in human history. And I think my question is, what kind of an innovation is it going to be? Is it going to be like the printing press, that diffused knowledge and power and learning widely across the landscape? Or is it going to be more like the atom bomb?
[MUSIC IN]
Gary Marcus: That was Josh Hawley, Senator from Missouri, addressing a Congressional hearing on artificial intelligence on May 16th, 2023. I was there to testify, too, along with Sam Altman, the head of OpenAI, and Christina Montgomery, Chief Privacy and Trust Officer at IBM.
A year ago, this wouldn’t have happened. Hardly anyone outside of the field was thinking about AI. In 2019, I co-wrote a book about how we could build AI we can trust. But back then, few people seemed concerned.
Then ChatGPT came along, and suddenly everybody — even U.S. Senators — became worried about AI — and whether we can rein it in.
[ARCHIVAL TAPE]
Senator Richard Blumenthal: I alluded in my opening remarks to the jobs issue, the economic effects on employment. Uh, I think you have said, in fact, and I’m gonna quote, “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” End quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is and whether you share that concern.
Sam Altman: Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict.
Gary Marcus: Altman gave some reasons why he wasn’t all that worried about jobs and landed on an optimistic note.
[ARCHIVAL TAPE]
Sam Altman: So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.
Senator Richard Blumenthal: Thank you. Let me ask Ms. Montgomery and Professor Marcus for your reactions to those questions as well. Ms. Montgomery–
[MUSIC OUT]
Gary Marcus: Montgomery spoke about job retraining, and then Senator Blumenthal came to me. I started by talking about the unreliability of current AI, but I also wanted to flag something – I didn’t think Sam Altman had really answered the question.
[ARCHIVAL TAPE]
Gary Marcus: And last, I don’t know if I’m allowed to do this, but I will note that Sam’s worst fear, I do not think, is employment. And he never told us what his worst fear actually is. And I think it’s germane to find out.
Senator Richard Blumenthal: Thank you. I’m going to ask Mr. Altman if he cares to respond.
Sam Altman: Yeah, my worst fears are that we cause significant — we, the field, the technology, the industry — cause significant harm to the world.
Gary Marcus: Significant harm to the world. And that’s from the head of OpenAI, the maker of ChatGPT.
[MUSIC IN]
AI could conceivably wipe humanity out altogether by helping develop new bioweapons or even triggering a nuclear attack. How do we prepare for a world increasingly dominated by an AI that we don’t even know how to control? That’s what the Senators wanted to know, and that’s what we’ll be talking about today.
I’m Gary Marcus, and this is the final episode of Humans vs. Machines.
[MUSIC OUT]
Gary Marcus: Alondra Nelson has been in the vanguard of thinking about these issues for a long time. She now holds a chaired professorship at the Institute for Advanced Study in Princeton, and before that, she was at Columbia University. In 2021, she joined the Biden administration in the Office of Science and Technology Policy. Soon, she started working on AI, creating guidelines for governments, companies, and individuals to use the technology in a way that would keep us all safe.
It was a monumental task and a huge opportunity. She compares it to the 1980s and ’90s and the dawn of the internet, when regulatory choices helped lead to the dominance of social media and a lot of the ills it has caused.
Dr. Alondra Nelson: You know, what we face, it would be like going back to 1986 or 1994 and having someone say, ‘You need to regulate the internet, or you need to regulate the worldwide web.’ I mean, so what we are dealing with, artificial intelligence and generative AI, is that sort of broad of scope. And there would've been no one answer to that question. And so, you know, I think we could have done it better, and I hope that we have learned from that example and can do something much more coordinated, including parts of international coordination.
Gary Marcus: And I think part of why a lot of us feel urgency right now is these things get locked in. You know, the wrong choices were locked in on social media, and we don't have that much time to sort this all out.
Dr. Alondra Nelson: We truly do not. Yeah. I mean, to the extent that we even made choices around social media, you know?
Gary Marcus: That’s right. We don't want the choices made by default ‘cause we didn't even make them—
Dr. Alondra Nelson: Correct. Yeah, that's right.
Gary Marcus: In so many ways, Dr. Nelson and I have come to the same place on so many issues, and so I was curious about her background.
Gary Marcus: What was your childhood like, and how did you wind up in AI?
Dr. Alondra Nelson: I am the child of two veterans—my father in the Navy, my mom in the Army. My mom was a cryptographer and worked, I don't know, you know, a mile or so underground. My mother would go on to work for the Department of Defense as a computer engineer and programmer working on those large mainframe computers. And so, that's the seeds, the literal seeds of my actual life.
Gary Marcus: And when was it that you realized that you were gonna focus on computers? So, I mean, you saw computers, like, up close and personal when they were huge and enormous and cumbersome, but that didn't mean they were your life calling.
Dr. Alondra Nelson: I think it wasn't the life calling, is that they were just part of life. So it was the same time my mother was working on these large mainframes and doing that work. It was also the case that she would come home and she would say, “Oh, the guys at work sent this home for you,” and it was Pong, you know, it was these sort of, very first computer games and these sorts of things. And so, the technological piece was just a part of life, and then there was a time when it all sort of came together.
So, you know, I leave UCSD, I go to graduate school at New York University. And so it was a kind of tech boom time in New York City. And it was in that moment that I started thinking about issues of technology and inequality. But it was also the case that I always brought with me another perspective. And that was, you know, a kind of outsider perspective often and also a perspective, and this really comes from my parents, who very much gave me an appreciation of the history of our country and that it had been a challenge for Black people, for marginalized communities to enter these fields.
So one could both work in science and technology, have aspirations and optimism about what it might accomplish, one could make their living, and raise their family in this space, but also, I think, have a real critical appreciation for the fact that sometimes we get it wrong and sometimes when we get it wrong, it causes harm and damage.
Gary Marcus: Do you mind talking about one or two historical cases where we have gotten it wrong?
Dr. Alondra Nelson: Sure. One example, and this is more from the world of medical research, of course, is the Tuskegee Syphilis experiment, which is quite notorious. That was a 40-year experiment that was started by the US Public Health Service, right? So, it was the federal government doing the research. So the Tuskegee Syphilis study goes from 1932 to 1972, and in 1972, there's a, you know, front-page AP story in the New York Times that basically exposes the Tuskegee Syphilis experiment.
[MUSIC IN]
Gary Marcus: What came out was that the United States Public Health Service had been conducting a study of almost 400 poor Black men with syphilis for forty years. The men had been used as guinea pigs, going without adequate medical treatment even after an effective therapy was discovered. Many of them died, and the study continued long after some people had first raised questions about its ethics.
Dr. Alondra Nelson: This was African American men, sharecroppers in Alabama, who had, uh, syphilis, and although there was a cure for the disease, scientific researchers allowed the course of the disease to carry on into its late stages, including, you know, fatal stages for some of these men.
Gary Marcus: I, I don't actually know the history, but I've always imagined that the experiment must have at least been in the background when people started coming up with human review boards for, like, psychology experiments and, and so forth.
[MUSIC OUT]
Dr. Alondra Nelson: Sure–
Gary Marcus: Maybe you can elaborate the history of it.
Dr. Alondra Nelson: Yeah, there are several kinds of commissions and the like, and the thinking about research with human subjects and experimentation. One of these is the issuance of something called the “Belmont Report,” and the Belmont Report comes to establish what we know now to be bioethics principles. You know, not doing harm in research. And then it goes on to be institutionalized in things like institutional review boards and, you know, informed consent, um, in research on campuses and in research centers.
Gary Marcus: The Belmont Report was a postmortem on a huge failure in scientific ethics; we don’t want to have to write similar reports for AI. But when I think about the kind of bias in facial recognition that’s led to people being falsely arrested, or about the harms that AI might cause to democracy through massive misinformation, I worry about whether we are getting ahead of our skis — rushing the technology when we haven’t figured out the ethics, yet again.
Dr. Alondra Nelson: We came into office with, there already being, I think, concerns voiced about facial recognition technology, about the use of these technologies and surveillance, about them leading to false arrests, and the particular burden of these technologies because the data was biased and it was not safe and effective particularly for communities of color, for people with dark skin. So, you know, that conversation was happening. There were conversations coming out of the last presidential election about information integrity, which we sometimes call misinformation and disinformation. There were conversations about mental health harms resulting from, um, you know, some engagement with social media, particularly with regard to young people. And what these all shared in common was the use of AI and algorithmic amplification.
Gary Marcus: What Dr. Nelson is talking about here, and what she’s devoted her career to trying to fix, is the sometimes disastrous collision of science and technology with human values. That tension has always been there, but it is magnified by AI.
Machine learning systems are often extremely good and fast at achieving specific goals. But they can do so by contradicting our deeper values and priorities. And sometimes, we don’t know until it’s too late.
One of the people who has thought a lot about these issues is Brian Christian, a visiting scholar at the University of California, Berkeley. His book “The Alignment Problem” came out in 2020.
Brian Christian: I think one of the earliest formulations of what we would now call the alignment problem comes from this very seminal essay by Norbert Wiener, the MIT cyberneticist. In 1960, he published an essay that's called “Some Moral and Technical Consequences of Automation.” And, you know, it's one of these very prescient essays that reads like it could have been, you know, from last week or something.
And he says, ‘If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere once we have set it going, then we had better be quite sure that the purpose we put into the machine is the thing that we truly desire.’
[ARCHIVAL TAPE]
Norbert Wiener: And I know very great engineers who would never think further than the construction of the gadget and never think of the question of the integration between the gadget and human beings in society. If we allow things a reasonably slow development, then the introduction of the gadget, as it comes in, may hurt us enough to provoke a salutary response, so that we realize that we cannot worship the gadget and sacrifice the human being to it.
Gary Marcus: So, that's a perfect example of the alignment problem, right? The system doesn't understand what the human intention is, and everything goes haywire. That is kind of what the alignment problem is, right?
Brian Christian: That's exactly right. And I think it has been fascinating to me to watch this idea, which had existed in the computer science literature now for 60 years, end up arguably one of the central concerns of the field of computer science at this point.
Gary Marcus: The alignment problem is much more pervasive than a lot of people realize. One case where this has often come up is in software that makes employment decisions. We don’t want software systems to discriminate based on things like race and gender. But often they do, accidentally, because of the nature of the data the systems are trained on and the way the systems are built, which is basically to mimic past data.
These systems are widespread but often in the background. Christian talks me through an example that could have affected health care for millions of Americans.
[MUSIC IN]
Brian Christian: So there was a 2019 study, uh, that was done by my Berkeley colleague, Ziad Obermeyer, and uh, a number of other researchers looking into this system that's called “The Optum Algorithm” and this is used to prioritize who should be seen first, and this is applied to something on the order of a hundred million people a year.
Gary Marcus: A massive undertaking, to be clear. And so they needed to find a reasonable way to prioritize patients.
Brian Christian: They used what seems, at first, like a very sensible proxy, which is the cost of people's care or the predicted cost of their future care. And this seems reasonably intuitive. You know, if I go to the hospital and my bill is millions of dollars, then it's reasonable to infer that I was quite sick. And so you can say, let's predict the future cost of this person's care, and we'll prioritize the people that are gonna need the most care, as measured in dollars.
It turns out, and this is the investigation that this research group did, that in the real world, there are people who just fundamentally have different costs of care. Not because they were less sick, but because they don't have the same access to high-quality facilities. Maybe the doctor doesn't take them as seriously. The doctor doesn't refer them to a specialist or doesn't refer them to inpatient care or something. Maybe there are people who, for various reasons, aren't able to get off of work in order to go to the hospital. Or they don't live near a hospital that offers as high-quality care or has, you know, that same level of expertise. There are various socioeconomic and demographic reasons that people's healthcare might cost less, but not because they're less sick.
Gary Marcus: On average, care for Black patients cost about $1,800 less per year than care for white patients with the same number of chronic conditions. Essentially, the system predicted that Black patients, on average, didn’t need an additional level of care. Not because they didn’t actually require more care, but simply because, historically, they had made fewer demands on the healthcare system.
Brian Christian: And this turns out to be a major problem because once you have a system that is now, at scale, you know, at the level of hundreds of millions of patients, prioritizing people on the basis of their cost of care, then you take a group of people that was already essentially receiving a lower standard of care relative to someone with a similar health need, and now you're systematically, and in an automated way, further deprioritizing them.
Gary Marcus: The algorithm wasn’t designed to take race into account. In fact, it was consciously designed not to look for racial differences. But the system wound up perpetuating past biases anyway; it wasn’t aligned with what the creators wanted. Getting this stuff right is tricky business.
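To make the mechanism concrete, here is a minimal sketch, with entirely synthetic data and made-up numbers (nothing from the actual Optum system or the Obermeyer et al. study), of how ranking patients by a cost proxy can deprioritize people who are just as sick but whose care has historically cost less.

```python
# Toy illustration (synthetic data, hypothetical numbers) of ranking patients
# by a cost proxy rather than by actual health need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups that are, by construction, equally sick.
group = rng.integers(0, 2, size=n)            # group label: 0 or 1
conditions = rng.poisson(2.0, size=n)         # "number of chronic conditions"

# Historical spending reflects need *and* unequal access to care:
# group 1 generates roughly 30% less cost for the same level of illness.
access_factor = np.where(group == 1, 0.7, 1.0)
cost = conditions * 1000.0 * access_factor + rng.normal(0.0, 200.0, size=n)

# Prioritize the patients the proxy says are "highest need" (highest cost).
top_decile = np.argsort(-cost)[: n // 10]

print("Group 1 share of population:        ", round((group == 1).mean(), 3))
print("Group 1 share of top-priority group:", round((group[top_decile] == 1).mean(), 3))
print("Mean conditions, group 0 vs group 1:",
      round(conditions[group == 0].mean(), 2),
      round(conditions[group == 1].mean(), 2))
```

In this toy data the two groups are equally sick, yet the group whose care historically cost less ends up under-represented among the patients flagged for extra help, which is the same pattern the audit found at scale.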
[MUSIC OUT]
Brian Christian: You know, this is a fairly standard way to approach a problem like this, but it can lead to these societal scale harms, and you know, these systems can be in place for many, many years before they're formally audited. And, um, I think that’s the sort of thing that we are unfortunately going to be seeing more of as we increasingly automate various aspects of society.
Gary Marcus: I care about this stuff a lot and have for a long time. My late father, for a while, worked for the Maryland Human Rights Commission, doing discrimination law. My own first paid gig, as a child, was writing computer programs to help him figure out the statistics in a discrimination case. If you just perpetuate history, you don't come to the values that we care about, about having equal opportunity in a society.
If we can’t trust machines to follow our values with respect to equality, one of the most straightforward alignment problems there is, we certainly can’t trust them to take care of humanity as a whole either. Soon, we could have AI wired into our most crucial systems. Our electrical grid. Even nuclear weapons. How can we be sure it will do what we actually want?
Brian Christian: When I think about worst-case scenario, I would put human extinction on that list. I take those concerns very seriously, and I think it's worth people thinking about, you know, the number of people that are working full time specifically on extinction risk from AI is like in the dozens. Um, I don't think that's too many people, right? So, you know, people argue that this gets too much air time on social media and so forth. But in terms of the actual resources being allocated, there's almost certainly more people who spend their full-time job scanning for asteroids that are gonna collide with Earth than people who work full-time on extinction risk from AI.
One way to think about AI and extinction risk is to see AI as something that could exacerbate the existing extinction risks. So, if you think there's a chance that nuclear war could cause human extinction, you can imagine AI as the sort of thing that could potentially exacerbate nuclear war, and the same thing is true in bioterrorism, AI is the kind of thing that you can imagine making it easier for a random malevolent person to, you know, close the knowledge gap between whatever their bad intentions are and the actual execution of how to do that.
Gary Marcus: We can’t just presume that AI is necessarily going to have our best interests at heart. Dr. Nelson tackled these problems during her time in government by co-developing something called the Blueprint for an AI Bill of Rights.
The Blueprint for an AI Bill of Rights isn’t yet law; it’s more like a thought experiment to figure out a framework for understanding how everyday life could be changed by AI and what protections people should have. For example, people shouldn’t have their work lives affected by biased or flawed algorithms; people should know when an AI system is being used in a decision that might affect them, and they should have the opportunity to understand why a system made that decision and to appeal it.
Dr. Nelson traces what she was trying to do back to the history of the office she helped run, the White House Office of Science and Technology Policy.
Dr. Alondra Nelson: The White House Office of Science and Technology Policy is established with legislation in 1976, and part of what that legislation says, this is not quite verbatim, but close, is that the role of government should be to maximize the beneficial consequences of technology and minimize the foreseeable and injurious consequences.
Gary Marcus: That framing of the OSTP’s mission just so resonates with me. Like, every day, I'm basically thinking about that with respect to AI: how do we make the most of this without causing so much harm? Just every day, that's really all I think about.
Dr. Alondra Nelson: When a new technology comes online, no matter how transformative and extraordinary it is, and what we're seeing in the public sphere is absolutely transformative, there are things that we can anchor in even as we are pivoting to figure out what's new and how we need to think in new ways. To be clear, I mean, part of why the Blueprint for an AI Bill of Rights is aspirational is because even some of what we were recommending and suggesting as a solution set were not the norms of the industry or the norms of even academia.
Gary Marcus: It’s all very well for the government to come up with a framework. But right now, AI is mostly being developed by Silicon Valley, and since the release of ChatGPT, there has been a gold rush in AI, with money flowing to private companies trying to capitalize on its potential. How can we make sure those companies make decisions that are good for all of us and not just for investors?
Dr. Alondra Nelson: I would want to reject this sort of sense that it's the government's responsibility solely to sort of fix it. I feel like, you know, part of how the debate is being framed, I think, by some of the sort of big tech entrepreneurs is, you know, “We did the most that we could, we created this great thing, and we're gonna keep shipping product, but we kind of wash our hands of the problems. This is where we want the government to come in.” And that's just, that's unacceptable. It is a choice for companies to use engineering processes and then ship products that don't offer accountability or transparency into the work.
[MUSIC IN]
And so, I think what I find frustrating is that this is often conveyed to the public as being technological constraints and kind of mechanistic limits that we just can't possibly know, and there's nothing that we can possibly do about this, but that these are choices that are being made.
We've got other imperfect examples. So we've got pharmaceuticals, you've got things like the FDA in which we turn it back onto companies to demonstrate that their products are safe. They have to provide the data set, and then government sort of analyzes it, approves, doesn't approve, and the like. So I think there's lots of ways to think about how to do that, regulatory, legislative, but much more has to be expected from the companies, and it's gotta be turned back on them to do it.
Gary Marcus: Amen.
Gary Marcus: In my view, Dr. Nelson is exactly right; we have to insist that the companies genuinely provide accountability and transparency. If the current technologies aren’t up to the task, we should demand better.
Dr. Alondra Nelson: Some very ambitious, very smart scientists, technologists, computer scientists have, over the course of their careers, you know, be they 30-year careers or 3-year careers, created something quite powerful, and now I think many of them are saying, you know, we gotta do something to fix it and, you know, let me help you fix it. To which I am replying, um, you know, you broke it. Let us all help fix it. Right?
Gary Marcus: And now we need all hands on deck.
Dr. Alondra Nelson: And now we need all hands on deck. And those all hands, um, you know, it's not quite disqualifying that you broke it, but you know, you need to open up the space for other people to be part of the conversation.
Gary Marcus: I give you a second Amen.
Dr. Alondra Nelson: [Laughs]
[MUSIC OUT]
Gary Marcus: In the year since we first planned this podcast, the world of artificial intelligence has totally changed. I used to spend most of my own time on the technical side of AI, wondering: how do we make AI smarter, more reliable, and more trustworthy?
Now, I spend most of my time thinking about what we as a society can do to make sure that the coming AI world is one that's good for humanity as a whole. And not just for a few companies. Since I spoke at that Senate hearing, I have been talking to people in governments around the globe, trying to help us get this right.
What I see-- everywhere I go-- is a genuine hunger to find the right balance between regulation and innovation.
Here's Senator John Kennedy, a Republican from Louisiana, asking me about the way forward.
[ARCHIVAL TAPE]
Senator John Kennedy: Professor Marcus, if you could be specific, this is your shot, man [Gary laughs] talk in plain English and tell me what, if any, rules we ought to implement, and please don't just use concepts, I'm looking for specificity.
Gary Marcus: Number one, a safety review, like we use with the FDA, prior to widespread deployment. If you're going to introduce something to 100 million people, somebody has to have their eyeballs on it.
Senator John Kennedy: There you go, okay, that’s a good one. I’m not sure I agree with it, but that's a good one. What else?
Gary Marcus: You didn't ask for three that you would agree with. [crowd laughs] Number two, a nimble monitoring agency to follow what's going on, not just pre-review but also post-review, as things are out there in the world, with authority to call things back, which we've discussed today. And number three would be funding geared towards things like AI constitution, AI that can reason about what it's doing. I would not leave things entirely to current technology, which I think is poor at behaving in an ethical fashion and behaving in an honest fashion, and so I would have funding to try to basically focus on AI safety research.
[MUSIC IN]
Gary Marcus: I stand by what I told Senator Kennedy, and there are a lot of other critical steps as well. To begin with, we need both national and global AI agencies.
At home, we need to make sure that someone who eats, sleeps, and breathes AI runs point on all the many risks and opportunities that AI introduces -- and has the staff and expertise to do that well. At the global level, we need cooperation on standards and monitoring and coordination to address risks.
To fight misinformation, we should insist that all AI-generated content be labeled as such.
Maybe most important: it can’t just be tech companies and governments working together. Independent scientists need to play a critical role in monitoring, regulating, and steering the future of AI, along with the tech companies. Ethicists and, more broadly, civil society as a whole -- all need to have a voice.
There are a lot of ways this could go, some good, some not.
Optimistically, we might get a global AI agency this year -- lots of people are actually taking that idea seriously -- and begin to regulate AI in a smart and thoughtful way. Next year, more research money could get poured into Responsible AI, attracting talented programmers and ethicists. By 2025, new companies and new tech might emerge. And by 2029, AI could be contributing massively to the world, addressing climate change, medicine, eldercare, and other tough societal problems.
But there could be a bleaker future, too, in which we fail to agree on a way forward.
By 2025, a small number of AI companies could quickly become far more powerful than most governments, defeating any attempts to rein them in. By 2027, we might have more powerful AI systems, leading to job losses and widespread unrest. By 2029, AI might be embedded deeply throughout critical infrastructure like the electrical grid and our defense systems. Misalignment could cause mayhem. The result could be conflicts and deaths, both deliberate and accidental. Chaos could follow, leading to anarchy.
The choices that we make now will shape the next century. We have to get this one right, and we don't have a lot of time to waste.
Let's build a thriving AI, one that's good for everybody.
Thank you for listening to our series; the tale is not over yet.
AI will continue to grow and change; more and more people will use it. Nobody knows for sure exactly where all this will wind up, but we hope we have given you some perspective-- on where it’s been and where it’s heading-- and some tools to help you think about the world to come.
I'm Gary Marcus, and this has been Humans vs. Machines.
[MUSIC OUT]