Is it even worth having a kid in the AI era?
It’s the question at the heart of The AI Doc: Or How I Became an Apocaloptimist, a new documentary about the promise, peril, and uncertainty surrounding artificial intelligence. Codirected by Charlie Tyrell and Academy Award winner Daniel Roher, the film follows Roher, a soon-to-be father, as he tries to understand how AI works, what risks it may carry, and what kind of world he and his wife are bringing their son into.
Along the way, he encounters both AI’s loudest skeptics and its most ardent utopians. The film features dozens of experts, including CEOs like OpenAI’s Sam Altman and Anthropic’s Dario Amodei, longtime researchers concerned about the future, and critics like Tristan Harris, who also appeared in the 2020 documentary The Social Dilemma about the harms of social media.
Ahead of its theatrical release on March 27, the film screened at SXSW, where AI hype and anxiety have been everywhere. While in Austin, Fast Company sat down with Tyrell and producer Ted Tremper to talk about the film, their process, and why the movie resonates—why, for example, after screenings, when the lights come up, strangers start talking to each other.
This interview has been edited for length and clarity.
What inspired you to create this film?
Tremper: The purpose of the film is that we wanted to make something that would—regardless of who you are, where you are—meet you where you’re at and create the invitation to ask yourself, What do I value? What are the things I care about? What are the things I value about my work and life? How can I develop intuitions about how the technology is being built so I can look out for what I need to protect and also the ways that it can benefit me?
We try to be very clear in the film that if anybody tells you there’s a “good AI” and a “bad AI,” and we can get one without the other, that’s not how it is.
You interviewed so many experts, but did you also interview average people?
Tyrell: It was a thing that we were really looking at and considering as a narrative backbone of the film in some ways. For one reason or another, we opted not to use them, mostly because that became the main focus and purpose of Daniel’s character and story in the film. It gave the audience a proxy: a person who asked a lot of the questions that people wanted to ask.
Were you intentional with the structure—about starting with the AI realists and later interviewing the AI startup experts at OpenAI?
Tyrell: When you’re new to [AI] and you go and you read about it, you’re gonna find the doomer stuff first. And then you’re gonna find that there’s this counter to that—the accelerationists—and then there’s a whole counter to both of those, which is the realists who will look at what [AI] is doing right now. No matter what you look at and in what order, it leaves you in the space of, What are the actual answers and what is the plan?
That’s why in the film we go to the CEOs, because they’re the guys building this. So they should have the plan, right? . . . Surely there are adults in the room who have a plan. And then, as you see in the film, you go there and there is no plan. So that puts the onus on the users, the non-technologists, to ask what we should do.
Were you guys disappointed with the answers they gave?
Tyrell: Absolutely. As a filmmaker, you want people to be demonstrative with their emotions. Because that’s just how we’re used to watching films, where things are kind of elevated. Theater takes it way up here. Film takes it here, but in real life if you see someone who just got in a car crash, or watch their house burn down, they’re usually pretty flat. That’s the reality. So filmmaking is a storytelling technique where you have to demonstrate that in a human space so the audience watching knows what that person is feeling on the inside.
That really comes through in Daniel’s reactions, where you see his disappointment after hearing people with influence over the pace and direction of AI steamrolling ahead [despite being] worried.
Tremper: There’s a section of the film where people talk about present-day harms. To me, those are the sense-makers, because they are the ones who are actually talking about the impacts. These are people like [AI journalist] Karen Hao, [Mozilla fellow] Deborah Raji, and others. It was super important to have them as part of the conversation. . . . The challenge is that because by and large the conversations have become siloed and adversarial—because of the nature of social media—a lot of them don’t actually realize that they agree on things. And so the sort of landing point of the film is that so much of it is about coordination.
Tyrell: Everyone believes in their own truth that they decided in this space. And it’s really tough for people when you believe in something and you discern it as fact, to understand there could be other truths. And that’s what this technology is: It’s going to be more than one thing at the same time, . . . and everyone needs to kind of regain that understanding. Because right now, especially when everything is broken down in such binaries—good, bad, left, right—we’re so acclimated to this right-or-wrong nature. But people are way more complicated than that, and the issues that we’re facing and the technologies that we’re using are way more complicated.
It seems to be a pretty emotional film for such a heady topic.
Tyrell: People keep saying they weren’t expecting a film about technology to make them emotional. But you have to be emotional to face this technology, to realize what we have that separates us from something that is technology, that isn’t human, to figure out what it is that we value. We need to decide what is important, what makes us, and how we can make that machine.
It’s been interesting to see how there’s a groundswell of AI backlash that’s been bipartisan.
Tremper: What you just described is a thing that will really give me hope. Because at a certain point, no matter how good the promises are that people are making, whether they’re politicians or people in the Big Tech companies, when the realities begin to become so asymmetrically shitty, [people] don’t care about promises anymore. If you promise me the future of education is going to be bright, but all I’m seeing is my kid is using ChatGPT for his homework, and my kid’s teacher is using it to grade that homework, there’s no actual human interaction happening.
There are going to be tens of thousands of benefits and tens of thousands of trespasses against every part of our lives. If there’s one thing people take away from this film, [we hope] it’s that [AI] is not going to be either good or bad, it’s going to be both. The way you navigate this territory is just by looking at your life right now and asking What do I care about? Who do I care about? And how is this technology affecting me? After the Pentagon and Anthropic thing happened, 1.5 million people unsubscribed from ChatGPT.
Five years ago, The Social Dilemma arrived when change felt almost too late. Is this meant to be more proactive?
Tremper: That was actually one of the major challenges of starting the film, because we started it before GPT-3.5 came out. I’m from Seattle; the joke I make is that a lot of this felt like we started making a documentary about Nirvana six months before Nevermind came out. Because it was just like nobody knows this is happening, this thing is coming, and now it’s become more ubiquitous.
Tyrell: Even with the feeling [of being] too late after The Social Dilemma, you get to things when you get to them, and sometimes that means that you can’t lament how late you are when there’s still work that needs to be done, right? So The Social Dilemma was able to activate quite a lot of people and reorient a lot of people’s thinking towards social media, including my own. And I grew up as a digital native. Instagram was a cool thing once upon a time. Now it’s just a mall. I can’t stand it. But that awareness of what was actually going on there [helped]. Was it too late? It would have been nice if it was earlier, but it’s not too late.
