Taylor Owen

Michael Pollan Says AI Isn't Conscious – But Plants Might Be

Four years ago, a Google engineer named Blake Lemoine went public with a claim that the large language model he'd been working on had become sentient. Almost no one took him seriously. Most AI experts said he was mistaking next-token prediction for signs of life, and Google promptly fired him.

But lately, it's started to seem like Lemoine may have been onto something.

When I spoke to Geoffrey Hinton last year, he was pretty confident that AI was already exhibiting signs of sentience. And Dario Amodei, the CEO of Anthropic, has said he can't be certain his chatbot isn't conscious.

So what does that actually mean? A chatbot is clearly intelligent. But does it have a sense of self? And what would happen if it did?

These are questions with no easy answers, partly because scientists and philosophers still don't agree on what consciousness is or where it comes from. And this is the precarious terrain Michael Pollan finds himself exploring in his new book, A World Appears.

Pollan's bestsellers have already reshaped how we think about food, plants, and psychedelics. Now he's trying to do the same for consciousness. That's no easy feat. The book, and this conversation, go to some pretty strange and mind-bending places. Fair warning: you might come away with more questions than answers.

But as Silicon Valley continues to flirt with the idea of artificial consciousness, of machines that don't just think but feel, these are the kinds of questions we should probably start asking.

Listen on: Apple Podcasts | Spotify