California’s state senate gave final approval early Saturday morning to a major AI safety bill that sets new transparency requirements for large AI companies.
As described by its author, state senator Scott Wiener, SB 53 “requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for [employees] at AI labs & creates a public cloud to expand compute access (CalCompute).”
The bill now goes to California Governor Gavin Newsom to sign or veto. He has not commented publicly on SB 53, but last year, he vetoed a more expansive safety bill also authored by Wiener, while signing narrower legislation targeting issues like deepfakes.
At the time, Newsom acknowledged the importance of “protecting the public from real threats posed by this technology,” but criticized Wiener’s previous bill for applying “stringent standards” to large models regardless of whether they were “deployed in high-risk environments, [involved] critical decision-making or the use of sensitive data.”
Wiener said the new bill was influenced by recommendations from a policy panel of AI experts that Newsom convened after his veto.
Politico also reports that SB 53 was recently amended so that companies developing “frontier” AI models while bringing in less than $500 million in annual revenue will need to disclose only high-level safety details, while companies making more than that will need to provide more detailed reports.
The bill has been opposed by a number of Silicon Valley companies, VC firms, and lobbying groups. In a recent letter to Newsom, OpenAI did not mention SB 53 specifically but argued that to avoid “duplication and inconsistencies,” companies should be considered compliant with statewide safety rules as long as they meet federal or European standards.
And Andreessen Horowitz’s head of AI policy and chief legal officer recently claimed that “many of today’s state AI bills — like proposals in California and New York — risk” crossing a line by violating constitutional limits on how states can regulate interstate commerce.
a16z’s co-founders had previously pointed to tech regulation as one of the factors leading them to back Donald Trump’s bid for a second term. The Trump administration and its allies subsequently called for a 10-year ban on state AI regulation.
Anthropic, meanwhile, has come out in favor of SB 53.
“We have long said we would prefer a federal standard,” said Anthropic co-founder Jack Clark in a post. “But in the absence of that this creates a solid blueprint for AI governance that cannot be ignored.”