Why Did We Stop Talking About The AI Apocalypse?
I have a confession: I put off doing this interview for a while.
Not because I thought it would be boring, but because the weight of it felt overwhelming.
A couple of years ago, existential risk from AI was the dominant conversation. Open letters demanded a pause on development. Geoffrey Hinton and Yoshua Bengio warned anyone who would listen that AI could spell the end of humankind. Legislation was drafted specifically to address the threat.
And then we moved on. We started using AI for work, for school, for planning our kids' birthday parties. When safety came up, it was about something more specific: deepfakes, job loss, AI psychosis. Collectively, we just stopped talking about the end of the world.
Nate Soares didn't move on. Last year, he and Eliezer Yudkowsky published If Anyone Builds It, Everyone Dies. The title tells you what you need to know about the subtlety of the argument. The book is unequivocal: if we keep going down the path we're on, it will lead to our collective extinction.
A lot of people will dismiss that claim. I think that's a mistake. You don't have to agree with Soares to take his argument seriously, and in this conversation on Machines Like Us, I tried to push him on the hardest questions: why can't we just program AI not to hurt us? Are these systems grown rather than crafted, and what does that mean for control? Is this a problem with all AI, or just the way we're building it now?
Whether you come away persuaded or not, this is a conversation we need to be having. If there's a chance he's right, we need to hear him out.
Listen on: Apple Podcasts | Spotify