Can We Trade Privacy for Safety?
I've spent the last month arguing that OpenAI's failure to report the Tumbler Ridge shooter's flagged conversations to law enforcement is a direct consequence of having no regulatory framework for these products. I still believe that. But the idea that companies should monitor conversations and report them to police is fraught with risk, and I wanted to talk to someone who could help explain why.
Meredith Whittaker is the president of Signal, the most private messaging platform on the planet. Signal doesn't collect your data, doesn't serve you ads, and when authorities ask for user information, its answer is simple: we don't have any. That also means illegal activity almost certainly happens on Signal that no one, including Signal itself, knows about.
So Whittaker lives at the sharp end of this trade-off between safety and privacy. And our conversation pushes us to think harder about what, specifically, we are asking for when we call for chatbot safety regulation.
Here's the thing that doesn't get said enough: your conversations with ChatGPT or Claude or Grok are not private. Employees and automated systems can read what you type, and OpenAI is about to start selling ads against those interactions. The product is designed to feel intimate, simulating patience, attentiveness, and understanding, but it is ultimately a content-serving product, one that many people open up to in ways they would to a person. It is a psychological bait and switch that exploits the gap between how the product feels and how it actually works. And because these products create an illusion of privacy, mandatory reporting to law enforcement, if not designed carefully, risks layering a surveillance obligation on top of what is already, fundamentally, a surveillance product.
What could actually help? I have been arguing for a Digital Safety Commission with real enforcement powers, mandatory risk assessments, transparency over safety protocols, and age-appropriate design standards. Upstream regulation that changes how these products are built, not downstream surveillance that monitors how people use them.
Whittaker also raised something I think deserves much more attention: the security vulnerabilities of AI agents. The whole premise of an AI agent is that it gets access to your messages, your calendar, your banking information, your contacts. That access compromises not just your privacy but the privacy of everyone you communicate with, including on encrypted platforms like Signal.
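To make that concrete, here is a minimal sketch of the endpoint problem. Everything in it is hypothetical and illustrative; Device, AssistantAgent, and gather_context are names I made up, not any real agent framework or Signal API. The point it demonstrates: end-to-end encryption protects a message in transit, but an agent granted device-wide read permissions sees the decrypted plaintext at the endpoint, and so, potentially, does whoever operates the agent.

```python
from dataclasses import dataclass, field


@dataclass
class Device:
    """Your phone: ciphertext arrives over the wire, but what is
    stored and displayed for you is plaintext."""
    decrypted_messages: list[str] = field(default_factory=list)
    calendar: list[str] = field(default_factory=list)
    contacts: list[str] = field(default_factory=list)


@dataclass
class AssistantAgent:
    """A hypothetical agent granted broad 'helpful' scopes at setup."""
    scopes: set[str]

    def gather_context(self, device: Device) -> list[str]:
        # Whatever the agent can read locally, its operator's servers
        # can potentially receive, log, and retain.
        context: list[str] = []
        if "messages" in self.scopes:
            # Signal's encryption already did its job in transit; the
            # plaintext on the device is readable by any process with
            # this permission, your contacts' words included.
            context += device.decrypted_messages
        if "calendar" in self.scopes:
            context += device.calendar
        if "contacts" in self.scopes:
            context += device.contacts
        return context


phone = Device(
    decrypted_messages=["(from a friend, sent over Signal) ..."],
    calendar=["Dentist, Tuesday 9am"],
    contacts=["A friend who never agreed to talk to an AI"],
)
agent = AssistantAgent(scopes={"messages", "calendar", "contacts"})

# One permission grant, and every correspondent's messages are visible
# to a third party none of them chose.
print(agent.gather_context(phone))
```

Notice that nothing here breaks Signal's cryptography. The vulnerability is structural: encryption guarantees end at the endpoint, and an agent is, by design, a process sitting at the endpoint with your credentials.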
This is a genuinely hard set of problems. I don't think we get the safety I've been calling for without confronting the privacy costs head-on.