O’Melveny Worldwide

California Enacts First-of-its-Kind AI Safety Regulation

October 2, 2025

On September 29, 2025, California Governor Gavin Newsom signed into law a first-of-its-kind regulation that imposes new safety and disclosure requirements on developers of the most advanced artificial intelligence (AI) models. The Transparency in Frontier Artificial Intelligence Act (TFAIA), also known as Senate Bill 53, is the product of California’s second attempt to comprehensively regulate the largest AI companies, after Governor Newsom vetoed a different safety-focused bill last year.1 The Act reflects the findings of a report on the regulation of frontier AI technology that Governor Newsom called for and that was published in June of this year.

Requirements Under the TFAIA

TFAIA seeks to reduce the risk of catastrophic events resulting from AI by requiring developers of the largest and most powerful AI models to implement and publish safety protocols and procedures. The Act applies specifically to developers of “frontier models,” defined as “foundation model[s] trained using a quantity of computing power greater than 10^26 integer or floating-point operations.” § 22757.11(h)-(i). For the subset of those developers with annual gross revenues exceeding $500 million—“large frontier developers”—the Act imposes a requirement to develop and publish on their websites a framework for mitigating catastrophic risks resulting from the use of their technology. § 22757.12.

“Catastrophic risk” encompasses a narrow set of “foreseeable and material risk[s]” that a frontier model will “materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars” in property damage resulting from a “single incident.” § 22757.11(c). Such incidents qualify as catastrophic risks only when they involve the frontier model assisting in the creation or release of a chemical, biological, radiological, or nuclear weapon; engaging in conduct without meaningful human oversight that is either a cyberattack or would constitute murder, assault, extortion, or theft if committed by a human; or evading the control of its frontier developer or user. Id. The definition explicitly exempts risks from information a frontier model outputs if that information is otherwise publicly accessible in a substantially similar form from a non-foundation-model source; lawful activity of the federal government; and harm caused by a frontier model in combination with other software if the frontier model itself did not materially contribute to the harm. § 22757.11(c)(2).

Among other things, large frontier developers’ frameworks must address how the developer incorporates national and international standards and best practices into its framework; assesses its model’s capability to pose a catastrophic risk; applies mitigation strategies to address that risk potential; uses third parties to assess the potential for risks and the effectiveness of mitigation strategies; and institutes internal governance practices to ensure the framework’s policies are implemented. § 22757.12(a). Large frontier developers must review these frameworks annually and publish updates resulting from these reviews. § 22757.12(b).

All frontier developers are also obligated to publish a transparency report each time they deploy a new or substantially modified version of a frontier model, providing, among other things, a mechanism for communicating with the developer and information about the model’s capabilities and intended or restricted uses. § 22757.12(c)(1). Large frontier developers must include a summary of their catastrophic-risk assessment in this transparency report. § 22757.12(c)(2).

In addition to these public disclosures, large frontier developers must report their assessment of catastrophic risk resulting from internal use of their models to California’s Office of Emergency Services (OES) on a recurring basis. § 22757.12(d). Failure to comply with the Act’s requirements can subject large frontier developers to civil penalties of up to $1 million per violation, enforceable by the California Attorney General. See § 22757.15.

TFAIA also requires California state agencies to develop procedures to facilitate reporting of critical safety incidents and catastrophic-risk assessments, and to ensure that the law’s requirements stay current in a changing technological landscape. It directs OES to establish a mechanism for AI developers and members of the public to report “critical safety incident[s],” defined as unauthorized access to, modification of, or exfiltration of a frontier model’s model weights that results in death or bodily injury; harm resulting from materialization of a catastrophic risk; loss of control of a frontier model that results in death or bodily injury; or a frontier model’s use of deception against its frontier developer to subvert that developer’s controls or monitoring. § 22757.11(d). Frontier developers must report such incidents to OES within 15 days. § 22757.13. Beginning in January 2027, OES will submit to the legislature and the Governor an anonymized report of critical safety incidents each year. § 22757.13(g).

Additionally, the Act tasks the Department of Technology with conducting an annual review to determine whether the definitions of the terms that define the Act’s scope—namely, “frontier model,” “frontier developer,” and “large frontier developer”—should be updated to reflect technological developments or changing national and international standards. § 22757.14.

TFAIA complements state reporting mechanisms with protections for employee whistleblowers, prohibiting all frontier developers from preventing an employee from disclosing, or retaliating against an employee who discloses, activities of the frontier developer that pose significant public health or safety threats resulting from catastrophic risk, as well as violations of TFAIA’s requirements. § 1107.1(a). Large frontier developers must also provide internal processes through which employees can anonymously disclose information about perceived catastrophic risks or their employer’s failure to comply with TFAIA’s disclosure requirements. § 1107.1(e).

Finally, the Act establishes a consortium within the Government Operations Agency that is tasked with developing a framework for the creation of a public cloud computing cluster, “CalCompute.” § 11546.8.

TFAIA preempts any city, county, municipality, or local agency regulation of frontier developers’ management of catastrophic risk. Sec. 5(f).

Implications for State AI Legislation

California’s new law adopts a regulatory approach similar to that of a recent New York bill, the Responsible AI Safety and Education (RAISE) Act, which has passed the legislature and awaits Governor Hochul’s signature. Both measures regulate only the leading AI developers—i.e., large developers of frontier models—and attempt to mitigate the risks of catastrophic harm through the development and disclosure of safety protocols.

This approach contrasts with the one taken in Colorado, the only other state to have enacted comprehensive AI regulation. Rather than focusing on frontier models and catastrophic risk, Colorado’s Artificial Intelligence Act (CAIA) requires covered entities to use “reasonable care” to protect consumers from the risk of algorithmic discrimination when AI is used in making consequential decisions, such as those concerning employment opportunities, access to financial services, or determinations about housing. CAIA applies to both developers and deployers of a “high-risk artificial intelligence system”—i.e., a system that “makes, or is a substantial factor in making, a consequential decision.” Its focus on high-risk AI more closely tracks the approach taken by the European Union’s AI Act. CAIA is currently set to go into effect on June 30, 2026, but debates about potential amendments to the legislation are ongoing and could result in further delay.2

All three measures depart from the majority approach in the U.S. Most states have focused on regulating the use of AI in specific sectors or use cases (e.g., healthcare, employment, elections, or commercial use of personal likeness via “deep fakes”), rather than pursuing more comprehensive laws governing AI. According to the National Conference of State Legislatures, all 50 states and the District of Columbia introduced some form of AI legislation in the 2025 legislative session.3 Bills have targeted a wide range of AI applications, including the use of deepfakes in elections, see Alaska’s SB2; the potential use of AI in rent fixing, see North Carolina’s H970; and how health insurers employ AI to determine coverage, see Rhode Island’s S13 and H5172. Several sector-specific bills have recently taken effect or are set to do so soon, including bills regulating the use of AI in employment, see Illinois’s HB 3773; the use of AI technology to create digital replicas of deceased individuals, see California’s AB 1836; safeguards for AI companions, see New York’s Assembly Bill A3008; and the use of AI in providing mental health services, see Utah’s HB 452.

This proliferation of state law reflects the void left by federal legislative inaction on AI, including this summer’s rejection by the Senate of a proposed 10-year federal moratorium on state AI regulation.4 In July, the Trump Administration released an AI Action Plan that sought to promote AI innovation by, among other things, limiting federal funding to states whose “AI regulatory regimes may hinder the effectiveness” of funding awards. That Plan does not appear to have hindered state regulatory efforts. Absent federal legislative action, AI developers are likely to face a patchwork of state regulations—ranging from sector-specific laws to comprehensive regulations like California’s TFAIA—for the foreseeable future.

O’Melveny will monitor further developments related to state AI regulation. Please contact the attorneys listed on this Client Alert or your O’Melveny counsel if you have any questions.


1 Letter from the Office of the Governor (Sept. 29, 2024), https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

2 John Frank, Big Tech wins in delay of Colorado’s AI transparency bill, Axios Denver (Aug. 26, 2025), https://www.axios.com/local/denver/2025/08/26/big-tech-ai-colorado-law.

3 See National Conference of State Legislatures, Artificial Intelligence 2025 Legislation (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation.

4 David Morgan and David Shepardson, US Senate strikes AI regulation ban from Trump megabill, Reuters (July 1, 2025), https://www.reuters.com/legal/government/us-senate-strikes-ai-regulation-ban-trump-megabill-2025-07-01/.


This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Jonathan P. Schneller, an O’Melveny partner licensed to practice law in California; Reema Shah, an O’Melveny partner licensed to practice law in New York; Amy R. Lucas, an O’Melveny partner licensed to practice law in California; Cassandra Seto, an O’Melveny partner licensed to practice law in California; Daniel R. Suvor, an O’Melveny partner licensed to practice law in California; and Kelsey G. Fraser, an O’Melveny associate licensed to practice law in the District of Columbia, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.


© 2025 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, 1301 Avenue of the Americas, Suite 1700, New York, NY, 10019, T: +1 212 326 2000.