Arthur Mensch Just Broke Silicon Valley’s AI Safety Cartel
The safety consensus just cracked in public
Silicon Valley loves a safe environment for big statements. A podcast studio works nicely. A venture conference works even better. The audience is friendly, the hosts are paid, the applause arrives on cue, and the doom gets edited into a highlight reel.
A government stage is different. The room has ministers. The room has regulators. The room has people whose job involves writing rules that become someone else’s paperwork for the next decade. So when Arthur Mensch, CEO of Mistral, looked at the fashionable “extreme AI risk” rhetoric and called it “distraction tactics”, he did it in the one venue where that phrase stops being discourse and starts being a prompt for procurement decisions.
“AI safety” has become more than a set of concerns. It has turned into a sorting mechanism. If you can persuade governments that civilisation hangs on frontier model containment, governments will pick a handful of labs as default custodians. They will build policy around them, buy their evaluations, rent their compute, and treat everyone else like unlicensed hobbyists.