California enacts first state-level AI safety law
SB 53 introduces new oversight for the AI industry in California, obliging major labs to reveal safety measures and disclose incidents such as cyberattacks.

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.
The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a mechanism for reporting safety incidents, including cyberattacks and autonomous AI behaviour, categories not covered by the EU AI Act.
Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.
Despite this resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without hindering growth, calling AI ‘the new frontier in innovation’.
Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.