Transcript for Season 4, Episode 3: What Happens When AI Takes The Wheel?
[THEME MUSIC]
INTRO - ARCHIVAL MONTAGE:
Tape: It's a test model of the car of tomorrow, a car that drives itself.
Tape: Eventually all cars will be self-driving.
Tape: Let's say at some point in the not-so-distant future, you’re barreling down the highway in your self-driving car.
Tape: But the transition to the future hasn't always been smooth…
Tape: You're like, “the future is here,” and then in a moment you're like, “wait a minute this isn't working at all.”
Gary Marcus: People have been dreaming of self-driving cars for decades. And in the past ten years, it’s been Silicon Valley’s turn, building automated car systems with artificial intelligence.
I’m Gary Marcus, cognitive scientist, AI researcher and AI skeptic. And I want to figure out: how can we make this all work? This is Humans vs. Machines.
[THEME ENDS]
In the last decade, over 100 billion dollars have been invested in self-driving cars. In 2012, Sergey Brin, Google's co-founder, told an audience that he expected driverless cars to be here by 2017. Elon Musk, Tesla's CEO, has been promising driverless cross-country trips practically every year since 2014. But we're still not quite there.
[MUSIC IN]
Driverless cars do exist now. Kind of. You can get a ride in a demo in San Francisco or Phoenix. But they're pretty limited in when you can use them and where you can go. In most cities, you can't use them at all.
It’s not hard to see the appeal… Cars where you can simply punch in a destination and arrive. You could watch a video, tune out, take a nap, play with your kids, or send text messages on the way to work without worrying about crashing into other cars. They would be a huge relief from the stress of traffic. For people with disabilities and seniors, driverless cars could provide a new level of independence.
And there's been plenty of bad news. Companies have gone bankrupt; ambitious initiatives have been delayed or abandoned altogether. In the United States, a government report found there were almost 400 crashes involving autonomous car technology in the 10-month span from July '21 to spring '22. Six of those crashes were fatal, and others caused serious injuries.
Why? After all that investment, all the hype... why are they still not here? And will they be any time soon?
[MUSIC OUT]
Cade Metz: If you're a reporter, you get into a driverless car that Google is testing and they take you around the block, basically. You know, you do, you do a three minute drive around Mountain View.
Gary Marcus: Cade Metz is a journalist with the New York Times who has spent much of the last decade covering artificial intelligence. And he’s spent a lot of time in self-driving cars.
Cade Metz: Even though I’ve been doing this for a decade, you get into a car that literally has no driver in it? It's amazing, right? You, you slide into the backseat, you close the door, you hit a big red button, and the car takes off. And there is literally no one in the front seat and the steering wheel is turning. It is eye-opening to say the least. It's not something you've ever seen. It's remarkable. It's impressive. It's like something out of a Hollywood movie.
Gary Marcus: So how do self-driving cars actually work?
The first thing to realize is, they just aren’t human beings. Human beings mostly rely on vision, and a little bit of hearing to drive. We use our eyes, we hear some sirens. Driverless cars use sensors - a lot of them - to figure out what’s going on. They use cameras, radar, some of them use sonar; some use lidar, which bounces a laser off objects to figure out how far away they are. [LASER SFX] Those sensors are what AI has to put together. We’ll talk more about that in a minute.
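To make the lidar idea concrete: the sensor times how long a laser pulse takes to bounce back, and distance falls out of the speed of light. Here's a minimal sketch of that time-of-flight arithmetic; it's purely illustrative, not any manufacturer's actual code:

```python
# Illustrative only: the time-of-flight idea behind lidar.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to an object, from a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~200 nanoseconds implies an object ~30 meters away.
print(f"{lidar_range(200e-9):.1f} m")  # -> 30.0 m
```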
Each company – Tesla, Google’s Waymo, GM’s Cruise, and so on – has built its own unique combination of sensors.
Cade Metz: If you're Waymo or you're Cruise, what you do is you go into San Francisco and you drive around with safety drivers and you map everything. You build these three dimensional maps of everything in the city or everything in a, in a certain geography within the city. And that’s one of the ways you constrain the problem. If you're Tesla, you're not doing that. They don't have lidar that can build those three-dimensional maps.
Gary Marcus: Teslas currently use eight onboard cameras, but no radar or lidar. I told Cade that the way he described all these cars made it seem like each one had a different personality.
Cade Metz: I like that you describe it that way. They, they do have their own personalities and there are these key moments where you can better understand what is happening. Lidar is based on light. Right? Sends light out. It bounces off things and you get it back. Well, in situations like when you're crossing a bridge and there's all this reflection from the bridge or, you know, you get reflection in other, other ways in the middle of a sunny day. That can cause issues with the lidar.
Radar, which is based on radio waves, can deal with some of those situations better. Each has its strengths and its weaknesses. And the philosophy of companies like Waymo, for instance, is you want to get as many of those sensors into your car, get all their strengths in there and use them together. That is a super hard thing to do, and that's part of the reason that Musk has gone in his direction is 'cause you don't have to deal with that if you just say, "All right, we're just gonna take what the camera tells us." Both are super hard.
Gary Marcus: When I grew up, I was a fan of the space shuttle program, and one thing that stuck with me was they always had redundancy in every system. So, like, there were three computers that backed each other up. I mean, they weren't perfect, but they did pretty well considering the crazy things they were doing. I've always been more towards the Waymo side, that you need redundancy because no one signal is gonna be enough. So where does everyone else land? Waymo's on one side, Tesla's on the other. Are most of the other big players closer to Waymo?
Cade Metz: Everybody else is closer to Waymo. Again, it's a particular philosophy. It makes a lot of sense. You want to make sure you've got a safety net here, a safety net there, a third safety net in the form of another sensor over here. What's hard about that is taking the information from all those different types of sensors — camera, lidar, radar – and consolidating all that, taking all these signals in. And sometimes they're gonna be contradictory. Right? Sometimes the radar is gonna tell you something different than the lidar and you've, and you've gotta find a way of deciding which one is right, so to speak.
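One classic, textbook way to reconcile disagreeing sensors is to weight each reading by how noisy that sensor tends to be. Real perception stacks use far more sophisticated machinery (Kalman filters, learned fusion networks), but a toy sketch conveys the intuition; all the numbers below are invented:

```python
# Toy sensor fusion: weight each sensor's estimate by 1/variance, so the
# less-noisy sensor gets more say. All numbers here are invented.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> float:
    """Combine two noisy distance estimates via inverse-variance weighting."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Hypothetically: lidar says the obstacle is 30.0 m away (low noise), radar
# says 33.0 m (noisier). The fused estimate leans toward the lidar reading.
print(fuse(30.0, 0.1, 33.0, 1.0))  # ~30.27
```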
[MUSIC IN]
Gary Marcus: This is where artificial intelligence comes in. At every moment the car essentially has to do two things. Perceive the world: What’s going on around me? And make decisions: Should I stop, should I go, should I turn?
AI coordinates the data from all those sensors to create the three-dimensional map of the space around the car that Cade mentioned. And then it makes decisions based on those maps, and ultimately sends commands to the engine, brakes, and steering to respond.
Once we humans get comfortable driving, we do all this without really even thinking about it. But how do you get a machine to do it? You could try to write a computer program where you imagine every turn you might need to make, every decision you have to make, but it’s too hard to anticipate everything.
So nowadays everybody is banking on machine learning and big data. The more miles you drive, the more data you have. Companies use their driving data — sometimes even simulated data, like you might find in Grand Theft Auto — to train machine-learning systems, to help cars figure out what to do in different circumstances.
What that means is, these cars can work great on scenarios that are familiar from their training set – say, stopping at a red light on a sunny day. But they sometimes struggle when they have to deal with something new, something out of the ordinary. Sometimes this all works out, and sometimes it doesn’t.
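A toy way to see why novelty is hard: if a learned system judges new situations by how close they are to what it has seen before, an input far from everything in the training set leaves it with little basis for a decision. The scenario "features" below are made up for illustration:

```python
import math

# Hypothetical "scenario space": each training situation is reduced to a
# 2-D feature point. Real systems use vastly higher-dimensional features.
training_set = {
    "red light, sunny day": (0.9, 0.1),
    "pedestrian in crosswalk": (0.2, 0.8),
}

def nearest_scenario(features):
    """Return the closest known scenario and how far away it is."""
    label = min(training_set, key=lambda k: math.dist(features, training_set[k]))
    return label, math.dist(features, training_set[label])

# A familiar input sits right next to something seen in training...
print(nearest_scenario((0.85, 0.15)))  # ('red light, sunny day', ~0.07)
# ...but a novel one ("teens hanging out of a moving car") is far from both,
# so any answer the system gives is a shaky extrapolation.
print(nearest_scenario((0.5, 0.5)))    # nearest match is still ~0.42 away
```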
Cade Metz: What people don't see, and some people almost refuse to see and talk about, is, is the gap between where these types of systems are and perfect. Right? And that gap, um, you can characterize it as small, or you can characterize it as large, but it's important. Right? The gap between what these systems can do and the situation where they're perfect and they can drive on any city street in any condition any time of day and be safe. Right? There is a gap there.
Gary Marcus: To illustrate that gap, I asked Cade to describe a ride he took in late 2022.
Cade Metz: So you take off through the streets of San Francisco and you're enjoying this and, um, and then you think, “Wait a minute. Why are we going so slow? Why is it taking me 20 minutes to get to the top of Nob Hill? It shouldn't take this long. If I was in an Uber, it would take half the time.”
Gary Marcus: Presumably it takes so long because the engineers want to keep the cars out of trouble; they know it can only handle certain conditions, so they divert around potentially tricky situations.
Cade Metz: And then there was this great moment where we pull up to the light, um, in the middle of the Richmond District, and these teenagers who, like, had their windows open in this white Mercedes sedan…
[SFX KIDS YELLING]
So they're like sitting on the, on the edge of the windows with their heads sort of bobbing up, you know, outside the car.
Not just their heads, their torsos. Right? They're like joyriding and they look to their right and they see this car with no driver in it, and they are just dumbfounded and they start screaming, “Where is the driver in this car?” And you say, “There is no driver.” And, and they're kind of amazed. And you realize as you go through the city that this is an amazing thing. Right?
A couple more blocks go by and the teenagers have sped ahead of you, and then you catch up with them. And the car that you are in, the Cruise car, clearly thinks that they are pedestrians because they're hanging outside their Mercedes sedan.
And it jerks to the right to avoid what it thinks are pedestrians walking down the street.
Gary Marcus: Wow.
Cade Metz: You know, who knows behind the scenes what is happening there, I can't see inside the software of the car. But there was a clear jerk to the right as we approached these teenagers again.
Gary Marcus: The car most likely jerked because it didn't fully understand the situation. In all the billions of pieces of data the car's system had been trained on, probably nothing exactly matched a bunch of teenagers leaning out the windows of a moving car. So it may have guessed the teenagers were pedestrians, and made an emergency maneuver that wasn't really necessary.
No matter what kind of self-driving car you are in, no matter how it senses its surroundings, the biggest problem is what we call edge cases. Edge cases are unexpected situations that aren't in the data set the car was trained on, so the car doesn't really know how to deal with them.
Cade Metz: Another one I had recently, I was in a Waymo car in San Francisco and there was a safety driver behind the wheel. We reached a stop light that was out and there was a traffic cop directing traffic and they were waving us through, like vehemently waving us through the intersection and the car just stopped and would not go, and eventually the safety driver had to take over. These are hard things to do. I'm sure there are cases where it does recognize what is happening and it responds accordingly. But, you have to find a way of dealing with those edge cases.
The question is, like, “Can you build technology that is as good as us at dealing with, with that kind of chaos, or better than us?” It's probably gotta be better than us.
[MUSIC IN]
Gary Marcus: In order for driverless cars to be safe, as a society we need a way to systematically test edge cases, but we don't even know how to do such a thing.
[AI ADDICT VIDEO]
John Bernal: Let’s throw objects in front of my car, and see how well it performs.
Gary Marcus: One person decided to take matters into his own hands: John Bernal, known online as AI Addict. He worked for Tesla for a while, helping them collect data, and got fired after starting his YouTube channel, where he tests how self-driving cars react in unexpected situations.
Here he’s placing a wooden shipping pallet in the road and the car swerves erratically.
[Clip] [car screeching]
Gary Marcus: Now things get really wild. He has a friend throw a BBQ grill on the road, right in front of the Tesla. The video is pretty frightening.
[Clip] [noise!]
John Bernal: I didn't do any acceleration or the braking there, but I did do the steering. That is interesting, but it can't auto-steer around an object, or brake, really, in a significant time…
Gary Marcus: In the video, the car doesn’t get out of the way, so he has to swerve manually, at the last moment. It’s terrifying. Of course, it’s not real life. Nobody is going to deliberately throw a BBQ grill on the road — I hope — but one could fall off a truck, or some other crazy thing could. The point is we need to know how these cars are going to cope with the unexpected.
So how do we get closer to the dream of a self-driving car while keeping people – both inside the car and out – safe?
[MUSIC OUT]
[sound transition F-18 scorching through air]
Gary Marcus: We reached out to one of the world's foremost experts on AI, automated systems and safety, Dr. Missy Cummings. She's Director of the Center for Robotics, Autonomous Systems, and Translational AI at George Mason University. She has a Ph.D. in systems engineering. Before that, she spent a lot of time flying F-18s as one of the Navy's first female fighter pilots.
Missy Cummings: I tell people, “I don't know how you become a faculty member without first being a fighter pilot.”
Gary Marcus: Yeah, I should have got the memo.
Missy Cummings: It's a dog-eat-dog world. [Fade down]
Gary Marcus: Dr. Cummings has a really unique take on all this. And she likes to speak her mind. When she joined the National Highway Traffic Safety Administration (NHTSA) in 2021, Elon Musk was not happy, so much so that some of his fans attacked her on Twitter.
Missy Cummings: A lot of people think that I'm this naysayer of driverless car technology and I'm just trying to stop driverless cars. And I'll tell you that’s so far from the truth because I wake up every morning, especially in the past six months, really willing the self-driving cars to come along faster than they are.
I have a 15 and a half year old and I'm trying to teach her how to drive. And it is, it's a frightening experience as a parent. And I recognize all the, the changes that need to happen in her brain for her to be able to actually deal with multiple input streams of data and then be able to find the right course of action.
Gary Marcus: When I caught up to her, she’d just finished her term at the NHTSA.
Missy Cummings: At that exact same time every morning, for the National Highway Traffic Safety Administration, it was my job to read the accident reports of all self-driving and ADAS-equipped cars that had crashed on autonomy the day before.
Gary Marcus: Most major car companies have technologies called ADAS — for Advanced Driver Assistance Systems. And Tesla sells additional self-driving features on top of that.
Missy Cummings: Every day I read these accident reports. And so, I think I'm gonna have PTSD because I was being scared by my daughter on a nearly daily basis. And, um, honestly, I'm riding the bus a lot more now. [laughs]
Gary Marcus: How much were you reading? How many, you know, accidents big and small, were there on a given day?
Missy Cummings: It really would kind of depend on the day. Some days there would just be a couple, and some days it would be 20, 30. And there's no question that there's under-reporting. It's not necessarily intentional on the companies' part, but they don't have to report until they find out.
[MUSIC IN]
[ARCHIVAL]
Reporter: The National Highway Traffic Safety Administration put out its first report on self driving car crashes. Since July of last year, it says there have been 130 crashes across the country. The highest? 90 crashes in California.
Reporter: The driver in the 8-car crash on the Bay Bridge last month is blaming the full self-driving feature of his Tesla.
[ARCHIVAL END]
Missy Cummings: After spending a little over a year with the Biden administration as the senior safety advisor at the National Highway Traffic Safety Administration, one of my big lessons learned is the lack of mature systems engineering approaches generally across all of automotive transportation.
[MUSIC OUT]
Gary Marcus: That's Dr. Cummings' specialty – systems engineering. Basically, how the different parts of a system work together. Say you've made a regular, non-self-driving car. You've got this engine and this transmission. Do they actually work together? Is there a weird circumstance where the engine gets too hot and the transmission doesn't work anymore? The more pieces you have to fit together, the more complicated things get.
Missy Cummings: We're really looking at when you're changing the design of a system to include artificial intelligence, which really fundamentally includes non-deterministic technology.
Gary Marcus: In other words, instead of a system whose output is the same every time, you get one whose behavior can change.
Missy Cummings: So we're going to have systems that no longer perform predictably, the same way, every single time a human interacts with them. And because of that, that introduces new concerns. For example, how do we know that the training data of the system is sufficient?
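One crude way to ask Cummings' question, whether the training data is sufficient for the operating environment, is to compare the conditions a fleet actually encounters against the conditions in its training logs. The counts below are hypothetical:

```python
from collections import Counter

# Hypothetical condition labels from training drives vs. real-world deployment.
training = Counter(sunny=9000, rain=900, fog=100)
deployment = Counter(sunny=500, rain=300, fog=150, snow=50)

total = sum(training.values())
for condition in deployment:
    share = training[condition] / total  # Counter returns 0 for unseen keys
    flag = "  <-- under-represented in training" if share < 0.05 else ""
    print(f"{condition}: {training[condition]} training examples{flag}")
```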
Gary Marcus: There is another factor with these cars, one that really worries Dr. Cummings – the people inside the cars. How will the drivers interact with this technology?
Missy Cummings: When I flew F-18s, about once a month for three years someone I knew died. And it was always because of the conflict between the human and the automation on board. No AI, but just straight up plain old automation. And so that was my real motivation, you know. I thought to myself, “Oh my God. You know, there's got to be a better way.”
Gary Marcus: In the driverless car industry, people sometimes talk about Level 5 self-driving cars. You get in, you type in a destination, and off it goes. For now, cars can’t really handle that on their own. Humans still have to stay in the loop. Sometimes the cars handle things fine, but the human has to be ready for when the cars aren’t up to the job. And face it, humans are not entirely cut out for the job of sometimes paying attention. That became clear in the accident reports that Dr. Cummings was reading every day.
Missy Cummings: People are just getting up out of their seat. Not necessarily getting into the seat next to them or getting into the backseat, but kind of getting up and moving around. And I call it the booty bump: people are hitting the steering wheel with their hip or butt, and that's causing the car to go careening off the road. And what that suggests is people have a lot more faith in this technology when they're told that either you can be hands-free or that the car is full self-driving. People hear that and they think, "Oh, well, you know, I can reach over and make a sandwich, or do something with my coffee, or scold the kids in the backseat." And then the next thing you know, they're departing the highway at 75 miles an hour.
Gary Marcus: This is a huge problem, and not just for the average driver.
Missy Cummings: I was at a conference recently and a very senior person in a transportation department told me, "Well, the best thing about Tesla is that's the time when I can text and drive." [laughs] No. That's absolutely not it… Safety is their game, and they're a transportation professional. And they said this out loud, not just to me, but to a group of like 20 other people: "Yeah, mm-hmm, the time to text and drive is when my Tesla is on Autopilot." And I think that statement right there just kind of captures it all.
Gary Marcus: It’s kind of mind-blowing that a transportation professional would think that it’d be okay to text and drive. The technology is so obviously not there yet, but it’s not just that one person. It’s human nature.
Missy Cummings: When the car gets itself into trouble, people have an emergency reaction. They know that the autonomy has failed.
Gary Marcus: So they finally realize something has gone wrong, and sometimes their emergency response actually makes the situation worse.
Missy Cummings: Then they have to do something. And they will do two things, most likely. Number one, they will grab the steering wheel and over-control the response. But then one of the things we're also seeing is people will stomp on the wrong pedal because their feet are not in the place they remember them to be. And so we are seeing people actually hit the accelerator, causing an even worse accident, because their feet were somewhere else.
Gary Marcus: So now they're totally out of position, they're, they're kind of mentally out of position and physically out of position.
Missy Cummings: Totally out of position. That's right. These things are connected. And so look, I hate to be the Debbie Downer of technology, uh, because I'm a roboticist and I'm a futurist and I want to see this technology, and I really, really, really wish they could get it together in six months, before my daughter gets her license. But they're not going to, and we need to recognize that it's going to be a longer slog. And so we need to make sure that we help people stay alive and stay out of trouble. And telling them that they can be hands-free is wrong.
[MUSIC FADE OUT]
Gary Marcus: I just wanna point out, you're as dark in your views about humans as you are about the cars. Right? You are not coming to this from a perspective of like, “Technology is bad.” You're coming to this from a perspective of like, “Oh my God, humans are terrible drivers. They could be a lot better. We could build a better technology, but the technology that we have is not in fact better.”
Missy Cummings: Not close. Not close. Look, humans, we just have the ability to reason under uncertainty, which is missing in autonomous vehicles right now.
Gary Marcus: There it is again, reasoning. AI uses a lot of data, but it can’t really think.
Missy Cummings: It's the worst of all possible combinations. We've got autonomy that drives like teenagers.
Gary Marcus: [Laughs]
Missy Cummings: And we've got people that have regressed to their teenager abilities.
Gary Marcus: For Dr. Cummings, it goes back to the lessons learned in aviation.
Missy Cummings: You know, we need to recognize that this was a choice that we made. The aviation community chose not to consider the human as a legitimate subsystem until automation came along, and then we started having all these crashes, in both Airbus and Boeing aircraft.
You know, there's just a long, long history of the war between the machine and the human in the cockpit, and how many people had to die. So it's funny, that was 30 to 40 years ago. We're there now in surface transportation. And if you think it's bad in cars, it's gonna be worse when we start putting trucks, semi tractor-trailer trucks, out on the road with no drivers. So we've got to start taking this seriously: the human can save the day in a lot of cases. But you're assuming that that human is paying perfect attention, and that's a very bad assumption.
Gary Marcus: So I guess what the aviation industry figured out is that you have to engineer around the person and you have to expect that the person is fallible and like that's, that's a starting point.
Missy Cummings: Yeah, and I wouldn't even be that negative. I would say you need to design these systems around the strengths and weaknesses of each agent.
Gary Marcus: We can't take the risks lightly with driverless cars. People have died. The first we know of was in 2016, when a Tesla Model S on Autopilot collided with a truck turning left across a Florida highway, killing the Tesla's driver. And there have been others since.
Missy Cummings: I don't think, for example, self-driving car companies should be allowed to put any of their cars on the road at all, without a safety driver, unless they've done these hazard analyses. And that is where you go through the operations and figure out what are the most critical operations that could go wrong. And they should have to show a federal agency: "We have done comprehensive hazard analyses, we've thought this through, we understand where the likely failures are, and we've put in mitigations." So those are two immediate changes.
Gary Marcus: And right now, what's the requirement?
Missy Cummings: Nothing. No, nothing. You don't have to do any of that. You just do it. You just get to put 'em on the road. Uh, so yeah, you don't have to, you don't have to show that you've thought this through at all. So as long as the car meets the Federal Motor Vehicle standards, you can do whatever you want.
[MUSIC IN]
Gary Marcus: And the thing is, the regulations we need aren't here yet. In the early 20th century, cars became common and traffic deaths began to rise. Automakers started to offer safety features, but they weren't enough. The biggest safety improvements didn't happen until the government took action, with seatbelt laws and the Highway Safety Act. Seatbelts alone have saved more than 300,000 lives in the US.
The question is: do we want to wait for the manufacturers to regulate themselves, or do we want to start thinking now about how to make the industry as safe as it can be?
Missy Cummings: How should we be auditing these systems? If we have self-driving cars, for example, and they're routinely getting software updates, how do we know whether or not that's gonna interfere with the computer vision system?
Gary Marcus: Missy makes a good point. There’s a kind of software infrastructure we need, to deal with the fact that AI for these systems is constantly evolving. We want to make sure we are taking steps forward, and not backward. And we need systematic ways of measuring that.
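One piece of that infrastructure could be a simple regression gate: before a software update ships, replay a fixed suite of recorded scenarios and refuse to ship if the new model fails anything the old one handled. A sketch, with hypothetical scenario names and stand-in models:

```python
# Hypothetical regression gate for driving-model updates. The "models" here
# are stand-in functions that pass/fail named scenarios; real testing would
# replay logged sensor data through the full driving stack.

def safe_to_ship(old_model, new_model, scenarios) -> bool:
    regressions = [s for s in scenarios
                   if old_model(s) and not new_model(s)]
    for s in regressions:
        print(f"REGRESSION: update fails previously-passing scenario: {s!r}")
    return not regressions

# Toy example: the update fixes the dead-traffic-light case but breaks the
# school-bus case, so the gate refuses it.
old = lambda s: s != "traffic cop waving car through a dead light"
new = lambda s: s != "stopped school bus, children crossing"
suite = [
    "traffic cop waving car through a dead light",
    "stopped school bus, children crossing",
    "pallet falls off a truck at highway speed",
]
print(safe_to_ship(old, new, suite))  # False
```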
Missy Cummings: We need an entirely new field of maintenance workers for AI. Right? So it's kind of, you know, in the aviation world, we're used to thinking about a maintenance department where we've got guys who change out engines and change the oil and upgrade the software. And they all make sure that my airplane, uh, with all the newfangled software and hardware updates, works.
Well, now we need that same kind of group of people to start doing maintenance of AI. How do you know for sure that the images you used to train the system are still sufficient for the environment that the car is in? So we need a whole new workforce that exists just to stress-test artificial intelligence systems.
Gary Marcus: Ultimately, the question is really this: how can we implement these solutions in the world that we live in?
Missy Cummings: When I was a teenager, my mother used to say, "Well, if everyone else ran off a cliff, would you?" And I think that's what we're seeing. We're seeing cliff-running behavior in the automotive industry, because everybody saw how popular Tesla was, and now many other manufacturers are embracing the idea of hands-free driving. I mean, you can look at multiple websites, TV commercials: hands-free, hands-free, hands-free. You know, we have made a very serious error as a society to endorse hands-free technologies. And I hope that the manufacturers start to rethink this.
Gary Marcus: I want to pause on a word Dr. Cummings just used. Society. As a society, we’re not deciding for ourselves what we want from driverless cars, from our chatbots, from our medical AI. Too often, we are leaving those decisions to Silicon Valley.
Here’s Cade Metz again.
Cade Metz: The way Silicon Valley works is once one person starts saying that, everybody has to start saying that. That's the way you attract the money. That's the way you attract the talent. Even if you're fully aware that this type of thing isn't going to happen immediately, you've got to say that because everyone else is saying that. It's just part of the way that Silicon Valley operates. That's what it's designed to do, is do things that have never been done before and to be optimistic, because if you're not optimistic, you're certainly not gonna do anything that hasn't been done before.
[MUSIC IN]
Gary Marcus: And there are often huge financial dividends for that optimism. But that doesn't mean it's the right way forward. I've been worried for years that the industry has been overpromising, placing billion-dollar bets that if we just got more and more data, the edge-case problem would go away. And it just hasn't happened.
Cade Metz: But what happens is this, this bubble builds, right, where everyone is saying the same thing. And if you do see limitations, you're wary to say the least, if you're not frightened, to, to voice those concerns. And if you wanna raise the money and attract the talent, you not only are wary of voicing the concerns, you are absolutely not gonna do it. You're gonna do the opposite. You're gonna talk this stuff up like everybody else, and, and you're gonna say, “It’s just around the corner.” That's just how, how, how businesses operate. Particularly in Silicon Valley.
Gary Marcus: In the next episode, we look at ChatGPT and the sudden explosion of generative AI. Generative AI can draw you a picture. It can write a decent undergraduate essay. It can even pass a bar exam. But can it tell a good joke?
[Montage]
Naomi Saphra: The problem is that comedy comes from surprise. And language models do not have this natural ability to make things surprising.
Bob Mankoff: You usually will get word play.
Gary: Is it good word play? I mean, is it stuff you'd be proud of, or it’s like?
Bob Mankoff: Stuff I’d be proud of? No. Stuff the, the ordinary, fairly un-humorous, not you, cognitive scientists would be proud of?
Gary: No offense taken.
Bob Mankoff: Yeah. (Laughs)
Gary Marcus: That’s next on Humans vs. Machines. I’m your host, Gary Marcus.
[MUSIC OUT]
[CREDITS]