Response to OpenAI's Letter and Decibel Interview
It's been a long week. Since the news broke that OpenAI flagged the Tumbler Ridge shooter's conversations and chose not to contact police, I've been doing a lot of media and policy work on what this means for how Canada governs consumer AI. These two pieces reflect where my head is at (more coming on Monday).
First, a follow-up to the policy memo Helen Hayes and I circulated earlier this week. Yesterday OpenAI released a letter to Minister Solomon outlining voluntary commitments in response to Tuesday's meeting in Ottawa. The letter is welcome, and OpenAI's willingness to engage directly with the government is notable. But ultimately it reinforces rather than resolves the case for legislation.
The company now admits its previous safety threshold was wrong. It disclosed that the shooter created a second account that its systems failed to detect. And every commitment it made is voluntary, unenforceable, and applies to a single company. Canadians also use Gemini, Copilot, Meta AI, and Grok. None of those companies were in the room. A regulatory framework is the only way to establish consistent obligations across the market.
Response to OpenAI's Letter to Minister Solomon
Second, I sat down with The Decibel to talk through some of the thornier dimensions of this that don't fit neatly into a memo. The privacy trade-offs involved in monitoring chatbot conversations. Why mandatory reporting to law enforcement can't be the whole answer. What a risk-based regulatory framework would actually look like. And why this government's AI adoption agenda depends on getting consumer safety right.
The core argument in both is the same: online harms and AI consumer safety are one problem, not two. Canada needs an online harms regulatory framework with chatbots in scope. And we need it this session.