Season 2, Episode 2: Technology and Our Brains

Transcript for Season 2, Episode 2: Technology and Our Brains

Kurt Andersen: Welcome to The World as You’ll Know It. I'm your host, Kurt Andersen.

We are discussing the future -- and, this season, the shape of things to come specifically as a result of technology.

I don't have to tell you the ways we live now, thanks to ubiquitous devices: connecting with friends and strangers digitally more than in person, getting information 24/7 -- and misinformation -- and having our personal digital lives constantly mined and sold to companies that want to sell us things or make us behave differently.

But how is all of this reshaping our minds and how we think? And the minds of younger people who’ve only lived in this networked digital era?

To explore all of that I am talking with Alison Gopnik. She is a professor of psychology and philosophy at the University of California at Berkeley and one of the best-known cognitive scientists around. She's famous especially for her research on children's minds, and for books like The Scientist in the Crib and The Philosophical Baby. She has also been collaborating with computer scientists working on artificial intelligence to try to help them figure out how to program AI to learn more like children do.

Alison, welcome to The World as You’ll Know It....

Alison Gopnik: Glad to be here. 

Kurt Andersen: So I want to start with, but then extend far beyond, your specialty -- which is to say babies and children, how they think and learn, and the effects of digital technology on them. Is there any evidence -- studies, experiments, or otherwise -- that you trust, that total Internet natives -- I'm not just saying people younger than you and me, but, like, twenty-five and younger, who grew up with broadband, grew up with smartphones -- are mentally different than their elders?

Alison Gopnik: Yeah, I think that's a really good question. And one thing that's interesting about thinking about childhood in this context in particular is that one of the things that I argue is that in some sense the function of childhood is to allow technological and cultural change. So what's happened, not just recently but for as long as we've been human -- because culture and technology are two of our biggest human characteristics -- is that each new generation gets to revise and change and shape and figure out the environment that it's in, in a way that's different from the previous generation. And the interesting thing in terms of our more modern technology and life is that there's this kind of trope where every time a new technology gets introduced, everyone's convinced that it's going to be terrible for kids. And it's funny that the "it's going to be terrible for kids" comes in before the "it's going to be terrible for all of us."

Alison Gopnik: I just saw a wonderful study, a longitudinal study that was actually done at Berkeley starting in the 1920s, where they started interviewing a bunch of families and their children. And sure enough, in the 1920s, what they said was movies. Movies are just destroying the next generation -- they spend all their time going to the movies and it's destroying their minds. And in the case of television, it occurs to me that the debate and the inquiry about whether television was good for you or not has lasted longer than television. So now we don't have kids watching broadcast television, and we're still saying, well, you know, there's not actually very good evidence one way or another, but it sure doesn't look like television has these negative effects. And by the way, the same thing is true with the Internet. There have already been a bunch of studies about adolescents in particular, looking at relationships between screen time and mental health, and what comes out over and over again -- it sort of doesn't matter what technology it is, it doesn't matter when the study's being done -- is that there might be some little effects around the edges for kids who are already vulnerable, but you just don't see big, consistent effects. And that's oddly discordant with the fact that everybody's first hypothesis is always that there are going to be these terrible effects.

Alison Gopnik: So that's where people start, and then you do the studies and it's, I don't know, maybe a teeny bit around the edges, but you just don't see it. And I think part of that is exactly because the way that human childhood brains and adult brains deal with the world is really different. One of the ways that I describe this is the difference between exploration and exploitation. Adult brains are really designed to act effectively: they have lots and lots of assumptions and knowledge about how the world works, and then they use that accumulated knowledge to do things, and do things effectively. And childhood brains are really different. Childhood brains are really designed for learning, for what neuroscientists call plasticity -- changing, figuring out the environment around them. They're not terribly good at doing things like, you know, putting on your jacket and getting to preschool, or, if you're a teenager, actually going out in the world and acting effectively. But the result is that when you put an adult brain in a new environment, it's kind of disrupted. It's confusing, it's distracting, it requires a lot of top-down attention and a lot of top-down work. Whereas that's what child brains are for: to take that new information in, make sense of it, and make some kind of coherent picture.

Kurt Andersen: Well, it seems to me that children are using these technologies, for better or worse. But have they changed the way the minds of little kids staring at screens work, in a way that my thirty-two-year-old children's didn't? The question, I guess, is: did TV change the way people actually think? Or do these more intimate, ubiquitous devices actually change the way brains work? Or is it "Nahh, it's just another thing," no different than books when you were a kid or television when I was a kid or whatever?

Alison Gopnik: Well, let me start with a parable that I use in "The Gardener and the Carpenter." So here's a story, one of these creepy dystopian stories about a device. There's a woman, and from the time she's two or three, she's on this device all the time. That's what she spends her time doing. She goes to school, but instead of paying attention to the lessons, she's got the device; she's looking at the device. Most of her experience of the world actually comes through this device. So she's actually spending more time in the world of, you know, someone walking into a ballroom in Napoleonic Russia than she is in her real life. And by the time she becomes an adult, literally every room in her house -- including, though this is a bit embarrassing, the bathroom -- has the device in it. And she won't do anything, including going to the bathroom, unless she has the device. One of her children has a concussion and has to go to the hospital, and the first thing she thinks is, "Oh, I need to have the device with me." And then, of course, she works really hard to get her own children hooked on the device as well.

Alison Gopnik: And then if you do neuroscience, you discover that the device has completely reshaped her brain, so that parts of her brain that were once used for other things, like vision or making her way through the world, are now completely dedicated to the device. And if you put the device in front of her, her brain automatically engages in this effort of taking the device's input and using that to shape what it thinks, rather than its own input. There's nothing she can do about it; it just does that, completely automatically.

Alison Gopnik: OK, well, the story that I've told you is absolutely true. And it's a story about the book. That's exactly what the book does. There are studies showing that by the time you're an adult who's read all your life, you literally can't ignore print. If you see, you know, the word "red" printed in green ink, you have a hard time actually saying that the ink is green rather than red. If you look in your brain, big parts of your visual system are now devoted to print, et cetera, et cetera, et cetera. And historically, there are at least arguments that many things that are very distinctive about us -- like Protestantism, like the idea of individual privacy -- came from having print, came from actually experiencing the world through this medium of print.

Kurt Andersen: But I'm interested in how, as you say, reading itself has, for thousands of years now, changed the way people perceive and think -- changed consciousness. Do we know yet how these new devices, and the always-on-ness and all the rest of it, are changing the way people think?

Alison Gopnik: I think we don't, and we couldn't. And if anyone tells you that we do, then they're lying, because we'd have to wait for 20 years to see how people were functioning, or whether children are functioning any differently than adults. I think it's perfectly reasonable for us as adults to say, "Look, here's what we see with this technology. It's good in some ways and bad in other ways. And it's kind of addictive in this dopey way, and I don't like that, and I don't want my children to experience it." But of course, that was true about TV. That was true about books, right?

Kurt Andersen: Right.

Alison Gopnik: So plenty of adults could properly say, you know, "You shouldn't be there with your nose in the book for hours and hours. You should be out playing sports and playing with the other kids outside." That's a perfectly reasonable thing to say. I don't think there's any reason to believe that it's going to be drastically different with this technology than with the other technologies. Maybe, you know, this time it's going to be different from all the other times. But I don't think there's any particular reason to believe that's true.

Kurt Andersen: One thing -- and I don't know if it's going to doom anybody, but it's the thing I've noticed in the smartphone era for 14 years now -- is parents and caregivers giving a lot less attention to the babies and toddlers in their care because of phones. I haven't seen the studies, but my daily anecdotal experience is that it's true. Is that a problem?

Alison Gopnik: Probably not, in the sense that, again, you know, as an adult, you notice someone looking at the phone. You wouldn't notice a mom turning to talk to the person next to her, right? So what you notice is: here's the mom talking to the baby, but then she's looking at her phone. For most of human history, mothers have been working at the same time as they were caring for babies.

Kurt Andersen: Yeah, yeah.

Alison Gopnik: You have a baby on your chest while you're digging for roots or you're planting, right? I think there's lots of evidence that for most of history, parents have been busy. They've been paying attention to lots of different things. They haven't had this kind of narrow, focused attention just on the baby. So it's not clear that what's happening with the phones is qualitatively different from what's happened for most of history, when grown-ups were busy and babies were learning by looking at the busy things that the grown-ups were doing.

Kurt Andersen: Yeah.

Alison Gopnik: It's more that it's something that's different for us, so it becomes very, very noticeable.

Kurt Andersen: You're reassuring to all parents, grandparents, and everybody on these scores. But let's talk about adults. Maybe I can get you to admit or agree that these technologies are not good for adults, because I think the issue is misframed as "Oh, it's going to hurt the children, oh, it's going to hurt the children." No -- when we see, for instance, and most obviously, the exciting falsehoods and alternative realities that are propagated in these various information silos we now have, that seems like a big problem that we need to figure out how to begin fixing, no?

Alison Gopnik: No. Well, I think, you know, again, I mean, I don't want to sound too--

Kurt Andersen: Dr. Pangloss here?

Alison Gopnik: No, not Dr. Pangloss, but it's just, well, on the one hand and on the other hand. So when print not just first appeared but first became cheap, something that everybody could use publicly, sort of ubiquitous, in the 18th century, one of the things that happened was this great explosion of scurrilous, libelous misinformation. There's a wonderful book by Robert Darnton called "The Literary Underground of the Old Regime." It's one of my favorite books, way before anything about the Internet, describing how, you know, there was Marie Antoinette pornography, and the Marie Antoinette "let them eat cake" line was like an Internet meme that nowadays we'd have a fact check about: "No, Marie Antoinette did not say let them eat cake!" So there was this explosion of disinformation and misinformation.

Alison Gopnik: And the punchline is -- you know, the only bad thing that happened was the French Revolution, right? So, um, other than that, there weren't any negative effects. Now, obviously, the French Revolution was a big historical event with lots of negative consequences that came out of it. But again, it's not as if this is the first time that this has happened in human history. And there's a kind of back and forth: on the other hand, when you start having curated, less Wild West forms of media, you end up having regulation, you end up having just a few sources of information -- which was kind of like our media ecosystem, say, when you and I were growing up. That has a lot of advantages in terms of not having a lot of scurrilous nonsense around. But it has disadvantages, because it meant that the three networks could control the information that people heard, and there were things that people didn't hear because they weren't on the three networks. That's not to say everything's fine and we shouldn't do anything about it. What we need -- and I think a lot of people in the tech world themselves are realizing this -- is the equivalent of code, the equivalent of regulation. We need to figure out what principles we could apply to make the system work better.

Alison Gopnik: And that's a really important, urgent problem. But again, it's not different from the important, urgent problem of regulating technologies that we've had as long as we've been human. My grandson comes to our house, and the two things that he does are: he does V.R. for half an hour, and then we read Lord of the Rings for two or three hours. And he pointed out, you know, the thing that's great about Tolkien is he has all this stuff about exactly where you turn left, and you went south, and then you came out to another place after three miles. It's just like V.R., but he does it with books, right? He puts you in the alternative reality.

Kurt Andersen: No, totally. Absolutely. And the maps. I mean, he was almost a game designer ahead of his time.

Alison Gopnik: You can see how much he would have enjoyed the idea of having this alternate universe that then you could... 

Kurt Andersen: Yeah.

Alison Gopnik: Uh, I mean, he did have the alternate universe. So, again, this is not to be Panglossian about it, but just to say, I think, you know, as with nourishment or anything else, what we want, both as adults and for our children, is to be able to have a kind of balanced diet of these different kinds of forms, and to have enough social and institutional norms and, you know, legal and political norms and regulations to bring out the better parts and get rid of the more dangerous parts.

Kurt Andersen: And you talk very interestingly about the sorcerer's apprentice problem, which is to say that the AI thinks it's doing what its creator wants, but it gets it disastrously wrong -- the way Mickey Mouse flooded everything with the broom. That's what AI is doing. And, as you say, it's because people click on things that outrage them and make them scared, and that's what looks interesting, so that's what they're given. And that is a real problem. Yes, we have to figure out how to regulate that. But how does that happen when these businesses, being businesses, make their money by selling ads, which depend on maximizing clicks? So, yeah, let's make people upset, because that's what makes them keep clicking. Right?

Alison Gopnik: Right. Right. Yeah. So there's a famous thought experiment about AI by Nick Bostrom. One of the other things that I've been doing is working with a lot of people in AI, thinking about how AI is going to work. Bostrom's analogy is: imagine the paper clip apocalypse. The paper clip apocalypse is that you tell the AI to make paper clips, and it goes out and turns everything in the world into a paper clip. Now, current AI isn't in that ballpark, but my colleague Tom Griffiths has pointed out that you could say something like that has happened, not with AI but with social media and attention -- we sort of have the attention apocalypse. It seemed like a good idea to tell our media, "Give me something that I want to see. Give me something that I'm interested in." And then it turns out that what it did was the equivalent of producing too many paper clips: it gave us things that we wanted to see, even if at a meta level we didn't really want those to be the things we saw, because they're --

Kurt Andersen: Because they outrage us and scare us, yeah, right. 

Alison Gopnik: Exactly. That's right. And I think that's a genuine unintended consequence of what looked sensible. If you said to someone, "Well, what kind of algorithm would you want?" you'd say, "Well, you know, show me things that I'm interested in." That seems like a sensible strategy. But it turns out to have this really negative effect, which is that you end up being interested in things like the outrageous or the frightening. And I think that's real. And there are interesting questions. For instance, the economist Paul Romer has argued that we should set up financial incentives so that instead of having this advertising model -- which sort of just by accident ended up being the model that we had for media -- we have something more like a subscription model, where you could say, "This is someone who I know is a reliable source of information, and so I'm going to pay to have that reliable source of information. I'm not just going to be clicking on the things that are going to get the best advertising revenue." The founders of Google early on said, "Oh, we can't do this. If it's going to be advertising, that's going to be a disaster. We can't do it if we're going to be making the decisions about this based on advertising." But, of course, that turns out to be the business model.

Kurt Andersen: It's so true, exactly what you said. If everybody just had to pay a subscription to Facebook and Twitter and all the rest, it wouldn't go away -- there'd still be misinformation and conspiracy theories and nonsense -- but it would be mitigated. And it's these kinds of choices that we see along the way -- like, "Well, no, we didn't want to do advertising, but that's the way it worked out" -- that we have to recognize as choices, choices that take us more toward utopia or toward dystopia, you know?

Alison Gopnik: Yeah, and Romer, for example, has this proposal that's being considered right now to have taxes on digital advertising, which is something that we hadn't done before. That would be an obvious thing that could let us get more income to do some of the things that we need to do, and that would act as a disincentive for the straight advertising model. So, again, I think these things have to be done, and you have to put some political and some sociological and some psychological work into doing them. But I don't see any reason to think that they can't be done. And again, if you think about the very strange fact about the 20th century -- that ads for your couch turned out to subsidize investigative journalism -- that was a pretty contingent outcome. And I think we need to figure out other ways of doing that.

Kurt Andersen: Well, one of the ways in which this seems different from all the rest is the fact that software and algorithms are designed to addict you. And you can say, "Yes, and so were pulp fiction writers -- they were trying to be addictive." But isn't this different? Or is it not -- no different, just more technological and all-consuming than other forms of commercial media in the past? I don't know.

Alison Gopnik: The same kind of conversations -- remember "The Hidden Persuaders"? The same kind of conversations happened about the way that advertising was corrupting your needs: that the whole consumer society was about persuading people that they needed the latest soap powder, when the soap powder was not actually going to change your life. And there were all sorts of subtle things that advertisers figured out, about having the right colors and the right people, that would make you want the things they were advertising. I mean, hijacking the human reward system is something that we've been doing as long as we've had a human reward system. And again, that doesn't mean we aren't responsible for trying to figure out how we deal with the fact that someone's trying to hijack our reward system now. But I don't think it's a qualitative difference from the kinds of structures that we've had in the past.

Kurt Andersen: An interesting point you made, that I hadn't really thought about, is that social media, and the Internet in general, give us something humans have never had before, by orders of magnitude: all of these people with whom we're connected in some kind of social interaction. Whether it was hunter-gatherers or even people in cities, we have never had such ability to be connected with so many people -- friends, acquaintances, strangers. That is a really new condition, and we don't know what the effect of that is going to be.

Alison Gopnik: It's interesting that when you talk about the kinds of things that people worry about with the Internet, they're very much like the things people worried about with cities. You're in a crowd, but you're alienated and lonely at the same time; you can interact with many, many more people. And we have lots of reason to believe that cities, for example, allowed innovation just by sheer virtue of being cities -- that you get more innovation when people are literally, physically in the same place. But you get more alienation. You have this kind of trade-off between what people sometimes call strong ties and weak ties: you don't have the kind of strong ties that you have in a family.

Kurt Andersen: That's actually a very interesting comparison. But the difference -- and it's something I think about a lot in terms of the anger and contempt that people so often feel free to express on the Internet -- is that I don't think most of them would do that in real life, because the person is there, your neighbor or whatever. The people online seem less real. They seem like characters in a game, and therefore I can call them horrible things or mob them, because there is no physical proximity.

Alison Gopnik: Well, I think this is a continuation of what happens in cities, right? It's funny -- my husband comes from a very small town in New Mexico, and I grew up in big cities, in Philadelphia and London. And he gets very upset because I'll walk down the street and won't make eye contact with the people I'm walking past. I don't know if you count as a New Yorker, Kurt, but anyone who grows up in a big city learns, as one of the first things, to make eye contact with the small group of people you actually know and to glaze out over the other people, because you couldn't be in those kinds of close personal relationships with all of them. There's that problem: when you get lots of people interacting, there's a natural lack of the kind of empathy that comes when you're in close personal relationships with people. I think that's a really important, true thing about how humans work, and how to deal with it -- again, in every culture, with every technology -- is a real, genuine, deep problem for people.

Kurt Andersen: Right, and I've often thought that digital media, social media, makes people treat other people online as... digital characters, less like human beings. Which seems like a kind of flip side to the fact that we're getting dependent on and in love with our devices, and will do so more and more as AI develops. I wonder if we're not going to start treating certain software and devices less like, I don't know, this microphone or a car or a toaster, and more like, at least, pets, right? If not humans?

Alison Gopnik: I think that's a really interesting question. And I think what we would tend to say from a developmental-psychology point of view is that interactivity is going to be the thing that makes the difference. So if you can really feel that you're interacting -- not just in the sort of pseudo way that a chatbot or something like that pretends to be interacting, but genuinely interacting with the other agent -- I think people will start thinking of them as agents that are out in the world in the same way that they are, much the same way that we welcome, you know, animals. Humans do tend to do that, and I think as computers become more sophisticated, that's one of the things that we'll do. We can decide whether we're going to do this or not, but if we're thinking about the future, this gets back to the point about the paper clip apocalypse. My colleague Stuart Russell has pointed out that one of the real questions is going to be: how will we get a computational system to know what our values are, what we want? How can we set up incentives? Again, this is like the social media case: how could we set up the algorithm so that it doesn't do the bad things?

Alison Gopnik: You know, there's a whole field of AI ethics, which is just about how we could change the algorithms so that they won't be biased, for example, so that they'll be more productive for social good.

Kurt Andersen: Right.

Alison Gopnik: But one of the things about this alignment problem that I think is interesting -- and a lot of this, Kurt, you can see if you just think about the phenomenon with human beings -- is that we have children, and we have a new generation. Every generation is a bit different: it grows up in a different environment, has different tools, has different technologies, has a different social structure. These are problems about how we're going to have a next generation whose values are going to be beneficent instead of malignant, right?

Alison Gopnik: That's just so baked into the human condition, so baked into our relationship with our children, with our teenagers, who are often the people at the cutting edge of the next change. And I think one of the things about caregiving that's very neglected is that to be a parent is to be able to look at another person, another sentient agent, and say, "You know, I want you to have a different set of goals than I do, and I want you to even have a different set of goals than I want for you," right? My job as a parent is to create a new intelligence that will be adapted for its environment, not necessarily adapted for my environment. And if we're ever going to have AI that's actually going to be able to do that -- be able to actually generate new values in a new situation rather than just go with its predetermined values, and at the moment we're not even in the ballpark of being able to do that -- we're going to face this problem of how we get a system, whether it's a machine or a new generation of humans, to adjust its values, adjust what it thinks, adjust what AI people call its objective function, the things that it's trying to go out in the world and get, in a way that overall is more beneficent rather than more malignant.

Kurt Andersen: Right. I have a question before we finish, about AI, and I get that we're nowhere near general artificial intelligence. But, you know, when I see how AI is as good at facial recognition as humans, better than doctors at diagnosing many things, and so on and so forth -- do you have any reason to doubt that machines will get there in some fashion?

Alison Gopnik: My hunch is, there's really only one creature that we know is conscious, namely me, right? But I think there's some interesting work in evolutionary biology. Peter Godfrey-Smith, who's a philosopher of biology, has this wonderful new book, "Metazoa," where he argues -- and other people have argued -- that the thing that really is associated with consciousness is a certain kind of function, a way of being in the world. And in particular, he suggests that during the Cambrian explosion you see these creatures start to appear -- to start out with, they're just little shrimp and underwater creatures -- but they can run after things. They have arms. They can go out and catch their own food, and they have claws, and they chase each other; they can be in conflict. And he suggests that gives you a kind of attitude toward the world. And the interesting question is -- again, we don't have AIs that are anything like that now -- but if you had an AI that could actually go out and function in the world, something more like a genuine robot, would that start to be in the category of things that have consciousness? And I think it's just sort of an empirical question: does it depend on being made out of carbon rather than silicon -- you know, all the examples we have are made out of carbon -- or is it something about these functional characteristics, like having an attitude toward the world, having goals, having to accomplish those goals, that gives you consciousness? There's a tendency among philosophers to think that sitting in an armchair and doing philosophy is the quintessential example of consciousness. And I think that's probably not true.

Kurt Andersen: So the way we have come to think of different kinds of intelligences -- in humans, and in other animals in addition to us -- is perhaps a way to think about consciousnesses, plural.

Alison Gopnik: I think that's exactly right. And one of the things that I'm writing about now and thinking about a lot is the diversity of intelligences, the incredible range of different kinds of intelligence that we see: across different ages of human beings, the difference in the intelligence of adults versus children; across different species, the intelligence of an octopus versus the intelligence of a primate or a crow. Those are all creatures that are amazingly intelligent, but they're all intelligent in really different ways that reflect their ecology. And I think even adult humans find ourselves being intelligent, and in states of consciousness, that are really different at different times. The kind of narrow focus on what an outcome is going to be is really different from when we're meditating and open to all the things going on around us, or when we're playing, or when we're thinking in a fictional way. So I think what will happen, if we actually get more sophisticated machines, is that it's unlikely there'll be a sense of: does this one have consciousness or not, as if it's a binary -- is it intelligent or not? It's that different kinds of creatures, with different kinds of functions and different kinds of computational complexity, are going to have different kinds of intelligence and probably different kinds of consciousness.

Kurt Andersen: I'm glad to hear you say that, because one of my new hobbyhorses is that this lovely idea of non-binary-ness should be extended to all kinds of things beyond gender and gender identity -- to really thinking about almost everything in terms of continua.

Alison Gopnik: And I think that's one of the things that's come out of thinking in psychology: the importance of diversity. This has been one of my points about parenting, for example. We tend to have this model where we have a particular outcome, we're trying to bring about this outcome, and if only we do the right things, we're going to get this particular outcome. And I think what biology tells us is that often the way we get innovation, the way we adjust to the world around us, is through diversity -- by trying out lots of different things, by things being sort of noisy and random a lot of the time.

Alison Gopnik: Computer scientists talk about something that I think is a very deep distinction between exploration and exploitation. And those are really different kinds of things. Being able to do something really effectively means narrowing your focus, having a clear goal, doing something for that clear goal. And that's very different from exploring the possibilities, trying things out, generating a whole lot of intellectual diversity. And you need both of those things to be able to adapt to an environment. There's an idea I like in biology called evolvability. Even when we're just talking about evolution, one of the things that happens is that you evolve evolvability: you get creatures that are designed to have more genetic variability, so that they can be more sensitive to the environment around them. So I think diversity isn't just, you know, a slogan; it's something that our biology tells us is really foundational, especially when what you're trying to do is adjust to new environments, to innovate, to deal with novelty. And I think babies and young children are just amazingly good at that kind of intellectual and psychological diversity.
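The exploration-exploitation distinction Gopnik invokes is classically formalized in computer science as a multi-armed bandit problem. As a minimal illustrative sketch -- not anything discussed in the episode; the epsilon value and payout probabilities below are invented for illustration -- an epsilon-greedy agent mostly exploits its best current estimate but occasionally explores at random:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Choose an arm: explore at random with probability epsilon,
    otherwise exploit the arm with the best current estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))              # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical three-armed bandit with hidden payout probabilities.
true_payouts = [0.2, 0.5, 0.8]
estimates = [0.0] * 3   # the agent's running estimate of each arm's payout
counts = [0] * 3        # how many times each arm has been tried

for _ in range(10_000):
    arm = epsilon_greedy(estimates)
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    counts[arm] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # approaches [0.2, 0.5, 0.8]; most pulls go to arm 2
```

Turning epsilon up makes the agent behave more like the child brain Gopnik describes (lots of exploration, inefficient action); turning it down makes it more like the adult brain, efficiently exploiting what it already knows.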

Kurt Andersen: So what about us adults, with these radically new technological environments over the next decade or two? How do we not just adjust to all of that novelty but make the future better -- play our cards right with technology so we end up closer to utopia than dystopia?

Alison Gopnik: You know, Steven Johnson has this book that just came out about the extension of life, and Steve Pinker had a similar kind of book, and I have things to argue with in them. But you know what? Two hundred years ago -- not even three hundred years ago -- two out of five kids died before they were five years old. And I sort of feel like, if you want to know whether we're getting closer to utopia or closer to dystopia, that statistic, all by itself, nothing else -- if you say, look, here's what's happened: your child is less likely to die -- I think I'll take that, right? I'll take that. I'll take "I'm bored and irritated by Twitter" on the one hand, but "my kids aren't dying" on the other.

Kurt Andersen: No, and that's all science and technology.

Alison Gopnik: Exactly. And this is a nice point in Steven's book: it's the combination of the science and technology and then this very unsung bureaucracy -- public health, regulation, making sure that the water in your sewers is clean, making sure that your milk is pasteurized. All these very dull, everyday civil-service kinds of things, combined with the science, have made an enormous difference in these really foundational parts of our life: do our kids die young? Not to mention, do we die young? Do we have accidents? Do we, you know, die of lightning strikes? So in that sense, I think, maybe we never get to utopia, but certainly some of the things that we take for granted about our lives are the result of... vaccines, right? You know, the fact that we could use the mRNA technology, just developed in the past 20 years -- that's a giant positive change. And again, that's not to say that we don't have costs and that we shouldn't be trying to do things to counterbalance the costs. But I think it's so easy to forget that really basic fact: the very fact that our children are surviving.

Kurt Andersen: One thing we need to figure out is how to raise people to be as glass-half-full as you are, because, my goodness, talking to you is like taking antidepressants. So thank you.

Alison Gopnik: Well, again, I will end with words of wisdom from my grandson. We've been reading Lord of the Rings together, which he loves and is very, very excited about. And he said to me at one point, "Grandma, you know, like, I think this is the way stories go. They go hope, hopeless, hope, hopeless, hope, hopeless. But then you always have to end with the hope part." And I think that's a very good insight about what makes for a good story. So maybe a good podcast too.

Kurt Andersen: That's the hero's journey in a nutshell. Well, thank you so much. This has been a pleasure.

Alison Gopnik: Well, thank you so much for having me, Kurt.

Kurt Andersen: The World as You’ll Know It is brought to you by Aventine, a non-profit research institute creating and sharing work that explores how today’s decisions could affect the future. The views expressed don’t necessarily represent those of Aventine, its employees or affiliates.

Danielle Mattoon is the Editorial Director of Aventine. The World as You’ll Know It is produced in partnership with Pineapple Street Studios. 

On our next episode of The World As You'll Know It, my guest is Roger McNamee. He is one of Silicon Valley's most significant defectors. As a lifelong tech investor, he was an early adviser to Facebook and a mentor to 22-year-old Mark Zuckerberg, but 5 years ago he had an epiphany -- he says he suddenly realized that Facebook and Google and the rest were enabling the destruction of democracy and civilized society and shared reality. He's the author of Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee: We’ve allowed the language of the monopolist to crowd out the language of democracy. We need more risk taking in the entrepreneurial world. We need more risk taking in technology. We need the rewards to go disproportionately to those people who make the world a better place and be taken away from those who are demonstrably harming us.
