California Continues Its Push to Regulate AI
October 17, 2025
California continued its push to regulate artificial intelligence this week, with Governor Newsom signing into law a suite of bills that will shape how companies use and develop AI. The newly enacted laws include regulation of AI chatbots, amendments to California’s AI Transparency Act, elimination of an autonomous-harm defense in civil actions against AI developers or users, and new protections against deepfake pornography. Governor Newsom also vetoed a handful of bills, including an employment-related measure.
Newly Enacted Laws
Regulating AI Chatbots
On Monday, Newsom signed SB 243 into law. The law requires anyone who makes available to a California user a “companion chatbot” to notify users that they are interacting with an artificially generated companion and to implement safety protocols designed to minimize mental health risks. “Companion chatbot” is defined as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs.” § 22601(b)(1). Notably, the law excludes chatbots used only for customer service or business purposes; video game chatbots that cannot discuss topics unrelated to the video game; and stand-alone consumer devices that function as speaker and voice command interfaces, act as virtual assistants, and do not sustain relationships across multiple interactions. § 22601(b)(2).
Operators of covered companion chatbots must issue notifications warning users that a chatbot is artificially generated when a reasonable person interacting with the bot would be misled to believe they were interacting with a human. § 22602(a). When a chatbot interacts with a user the operator “knows” is a minor, the operator must (1) disclose that the user is interacting with AI; (2) every three hours, issue a notification that reminds the user to take a break and that the bot is not human; and (3) implement “reasonable measures” to prevent the bot from producing sexually explicit visual material or encouraging the minor to engage in sexually explicit conduct. § 22602(c). Operators must also disclose to users that companion chatbots may not be suitable for some minors. § 22604.
SB 243 also prohibits operators from deploying their chatbot unless the operator has in place a protocol for preventing the provision of “suicidal ideation, suicide, or self-harm content” to users. § 22602(b). Protocols must be published on an operator’s website and must include notifications referring users to crisis service providers if a user expresses thoughts of suicide or self-harm. § 22602(b). Starting July 1, 2027, operators must annually report to California’s Office of Suicide Prevention the number of crisis-service-provider-referral notifications issued; their protocols to detect, remove, and respond to users’ suicidal ideation; and their protocols for prohibiting chatbot responses about suicidal ideation or action. § 22603(a).
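For illustration only, the sketch below shows how an operator might wire these notification and referral requirements into a chat session. Everything here is our own hypothetical scaffolding: SB 243 does not prescribe Python, the `CompanionChatbotSession` class, or the naive keyword screen, which merely stands in for whatever ideation-detection measures an operator actually adopts.

```python
from datetime import datetime, timedelta, timezone

# Illustrative compliance sketch only. SB 243 leaves implementation details
# to operators; the class, keyword screen, and message text below are
# hypothetical, not statutory language.

BREAK_REMINDER_INTERVAL = timedelta(hours=3)  # § 22602(c): cadence for known minors
CRISIS_REFERRAL = (
    "If you are having thoughts of suicide or self-harm, please contact a "
    "crisis service provider."
)
SELF_HARM_KEYWORDS = ("suicide", "self-harm", "hurt myself")  # naive stand-in for a classifier

class CompanionChatbotSession:
    def __init__(self, user_is_known_minor: bool):
        self.user_is_known_minor = user_is_known_minor
        self.last_break_reminder = datetime.now(timezone.utc)

    def notifications_for_turn(self, user_message: str) -> list[str]:
        """Compliance notifications to surface alongside the bot's next reply."""
        notices = []
        # § 22602(a), (c)(1): disclose the AI's nature. The statute requires this
        # only where a reasonable person could be misled (or the user is a known
        # minor); this sketch issues it unconditionally as a conservative default.
        notices.append("Reminder: you are chatting with an AI, not a human.")
        # § 22602(c)(2): every three hours, remind a known minor to take a break
        # and that the chatbot is not human.
        now = datetime.now(timezone.utc)
        if self.user_is_known_minor and now - self.last_break_reminder >= BREAK_REMINDER_INTERVAL:
            notices.append("Reminder: you've been chatting with an AI, not a human, "
                           "for a while. Consider taking a break.")
            self.last_break_reminder = now
        # § 22602(b): refer a user expressing suicidal ideation or self-harm to
        # crisis service providers.
        if any(kw in user_message.lower() for kw in SELF_HARM_KEYWORDS):
            notices.append(CRISIS_REFERRAL)
        return notices
```

In practice, an operator would substitute a dedicated classifier for the keyword screen and log each crisis referral issued, since the number of referral notifications must be reported annually beginning July 1, 2027.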
The law provides a private right of action for users injured by violations of its requirements, allowing users to recover injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and attorneys’ fees and costs. § 22605.
Amending the AI Transparency Act
Last year, California enacted the AI Transparency Act (CAITA), which requires certain producers of generative AI to help users detect when AI has been used to create or modify content. AB 853 delays the effective date of CAITA from January 1, 2026 to August 2, 2026, and expands the group of entities obligated to help users detect AI.
The existing version of CAITA requires, among other things, that covered providers—defined as producers of publicly accessible generative AI systems that have over one million monthly visitors—make available to users at no cost an AI detection tool that allows the user to assess whether content was created or altered by generative AI. CAITA also requires those providers to include latent disclosures in content generated by their AI systems and to offer users the option to include manifest disclosure in content created or altered by the provider’s AI.
AB 853 adds several new provisions.
First, beginning January 1, 2027, AB 853 prohibits generative AI hosting platforms—websites or applications that make available for download the source code or model weights of a generative AI system—from knowingly making available a generative AI system that does not place the disclosures required by CAITA. § 22757.3.2.
Second, beginning on the same date, AB 853 requires a “large online platform” to detect and maintain provenance data and allow users to inspect that data. “Large online platform” encompasses public-facing social media platforms, file-sharing platforms, mass messaging platforms, and search engines that exceed two million monthly users and distribute content to users who did not create or collaborate in creating that content. § 22757.1(h)(1). The definition excludes broadband internet access services and telecommunications services. § 22757.1(h)(2). Such platforms must (1) identify any provenance data that is embedded in or attached to content distributed on the platform; (2) provide users with provenance data disclosing whether content was generated or substantially altered by generative AI; and (3) allow users to inspect system provenance data. § 22757.3.1(a). Platforms are also prohibited from stripping provenance data or digital signatures from content uploaded or distributed on their platforms. § 22757.3.1(b).
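Purely as a sketch, an upload pipeline honoring these duties might be organized as follows. The `parse_manifest` helper and the in-memory content store are hypothetical stand-ins of our own; AB 853 does not prescribe a provenance standard, though an industry standard such as C2PA is the likeliest vehicle for the embedded “provenance data” the statute contemplates.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of AB 853's duties for large online platforms. All
# helper names and the storage layout are hypothetical; the statute does not
# prescribe a particular provenance standard.

@dataclass
class ProvenanceRecord:
    ai_generated: bool    # content generated or substantially altered by GenAI?
    raw_manifest: bytes   # preserved verbatim so users can inspect it

def parse_manifest(content: bytes) -> Optional[ProvenanceRecord]:
    """Hypothetical parser: identify provenance data embedded in or attached
    to uploaded content (§ 22757.3.1(a)(1)). Toy stand-in: finds nothing."""
    return None

def ingest_upload(library: dict[int, tuple], content: bytes) -> int:
    record = parse_manifest(content)
    # § 22757.3.1(b): store the bytes unmodified; stripping embedded
    # provenance data or digital signatures is prohibited.
    content_id = len(library)
    library[content_id] = (content, record)
    return content_id

def provenance_disclosure(library: dict[int, tuple], content_id: int) -> str:
    """User-facing disclosure (§ 22757.3.1(a)(2) and (3))."""
    _, record = library[content_id]
    if record is None:
        return "No provenance data is available for this content."
    return ("Generated or substantially altered by generative AI."
            if record.ai_generated else "Not marked as AI-generated.")
```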
Third, AB 853 requires manufacturers of “capture devices,” defined as devices that can record photographs, audio, or video content (e.g., cameras, mobile phones), to (1) provide users with an option to include a latent disclosure regarding the method and time of content captured by the device, and (2) embed latent disclosures in content captured by the device by default. § 22757.3.3(a). This requirement goes into effect on January 1, 2028, applies only to devices produced for sale in California on or after that date, and obligates manufacturers to comply only to the extent technically feasible. § 22757.3.3.
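Again for illustration only, a capture pipeline with a default-on latent disclosure might look like the sketch below. The JSON-trailer `embed_metadata` helper is a toy of our own invention; a real device would write the disclosure into a standard metadata container (for example, EXIF, XMP, or a C2PA manifest), and the statute obligates manufacturers only to the extent technically feasible.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of § 22757.3.3's default-on latent disclosure for
# capture devices. The trailer format below is a toy; real devices would use
# a standard metadata container (EXIF, XMP, C2PA, etc.).

def embed_metadata(content: bytes, fields: dict[str, str]) -> bytes:
    """Toy latent disclosure: append the fields as a JSON trailer."""
    return content + b"\x00PROV" + json.dumps(fields).encode("utf-8")

def capture(raw: bytes, device_model: str, disclosure_enabled: bool = True) -> bytes:
    # The statute requires an *option* to include the disclosure, with
    # embedding on by default; hence the default argument value.
    if not disclosure_enabled:
        return raw
    disclosure = {
        "capture_method": device_model,                          # method of capture
        "capture_time": datetime.now(timezone.utc).isoformat(),  # time of capture
    }
    return embed_metadata(raw, disclosure)
```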
Like CAITA, AB 853 provides that violators of its requirements are liable for civil penalties in the amount of $5,000 per violation, enforceable by the state Attorney General, a city attorney, or a county counsel. § 22757.4.
In his message accompanying the signing of AB 853, Governor Newsom acknowledged that “stakeholders remain concerned that provisions of the bill, while well-intentioned, present implementation challenges that could lead to unintended consequences.” Accordingly, he “encourage[d] the Legislature to enact follow-up legislation in 2026” prior to the law’s effective date “to address these technical feasibility issues.” We therefore expect to see continued regulation in this area before August of next year.
Regulating Defenses in Civil Actions
AB 316 prevents developers and users of generative AI from asserting an autonomous-harm defense in civil actions. The act applies to any defendant who “developed, modified, or used” AI. § 1714.46(b). Where a plaintiff brings a civil action against such a defendant alleging that the defendant’s development or use of AI caused the plaintiff harm, a defendant may not assert as a defense that the AI autonomously caused the harm. § 1714.46(b). In other words, the law ensures that defendants are held legally responsible for the harm caused by their development, modification, or use of AI, unless other defenses—such as that the harm was not in fact caused by their actions—relieve them of liability. § 1714.46(c).
Protecting Against Deepfake Pornography
AB 621 bolsters existing protections against deepfake pornography. The law expands the definition of material that can give rise to a cause of action to include “any portion of a visual or audiovisual work created or substantially altered through digitization … that shows the depicted individual in the nude or appearing to engage in, or being subjected to, sexual conduct.” § 1708.86(a)(7). It also grants to depicted individuals an additional cause of action against a person who knew, or reasonably should have known, that the individual was a minor when the material was created, § 1708.86(b)(1), and provides a cause of action against an entity that knowingly facilitates or recklessly aids and abets prohibited conduct, § 1708.86(b)(3). AB 621 also increases the statutory damages available to depicted individuals from $30,000 to $50,000 for non-malicious violations, and from $150,000 to $250,000 for malicious violations of the statute.
Vetoed Legislation
In addition to signing into law the bills discussed above, Governor Newsom also vetoed several measures, including SB 7 and AB 1064.
SB 7 would have restricted employers’ use of “automated decision systems” (ADS) in the employment context. It would have required employers to provide written notice before using an ADS and would have prohibited employers from using an ADS in certain ways, including to infer a worker’s protected status or as the sole basis for a disciplinary decision. Governor Newsom criticized the bill for “impos[ing] unfocused notification requirements on any business using even the most innocuous tools” and “propos[ing] overly broad restrictions on how employers may use ADS tools.”
AB 1064 would have prohibited companion chatbots from being made available to children if they are foreseeably capable of certain potentially harmful activities, including encouraging self-harm, consumption of drugs or alcohol, or disordered eating; offering unsupervised mental health therapy; encouraging illegal activity; engaging in erotic or sexually explicit interactions; prioritizing validation of a user’s beliefs over factual accuracy or safety; and optimizing engagement over safety guardrails. In his veto message, Newsom acknowledged his support for establishing safeguards for minors’ use of AI—including those in SB 243, which he signed the same day. But he expressed concern that AB 1064 “imposes such broad restrictions” that it might “unintentionally lead to a total ban on the use of these products by minors,” which could hinder their preparedness for an AI-shaped world. Nonetheless, he articulated his commitment to working with legislators to develop a bill next year to ensure proper safeguards for minors’ use of AI.
Implications for Companies Developing and Using AI
These laws present a host of new compliance obligations for developers and users of AI.
- Chatbot operators will need to assess whether they fall within the scope of SB 243 or are encompassed by one of its exemptions. Operators that fall within the law’s scope should start developing the mental-health safety protocols the law requires and ensuring they are prepared to issue the mandated notifications to AI users, including those targeted to minors.
- AB 853 delays but does not alter CAITA’s primary burdens for developers of generative AI. But it imposes new obligations on a range of additional companies that create and host content that might be altered using AI, including social media companies, file-sharing platforms, and recording device manufacturers. Companies newly swept into CAITA’s reach should begin assessing how they will comply with the law’s provenance-data and disclosure requirements, even as new legislation addressing the technical feasibility of compliance is likely forthcoming.
- Finally, developers and users of AI should be aware of new forms of liability, including potential allegations of aiding and abetting deepfake pornography, and should consider the impact of AB 316’s restriction on their available defenses in civil actions.
California is likely to continue its efforts to regulate AI, and we expect additional bills when the legislative session resumes next year.
O’Melveny will monitor further developments related to state AI regulation. Please contact the attorneys listed on this Client Alert or your O’Melveny counsel if you have any questions.
This memorandum is a summary for general information and discussion only and may be considered an advertisement for certain purposes. It is not a full analysis of the matters presented, may not be relied upon as legal advice, and does not purport to represent the views of our clients or the Firm. Jonathan P. Schneller, an O’Melveny partner licensed to practice law in California; Reema Shah, an O’Melveny partner licensed to practice law in New York; Amy R. Lucas, an O’Melveny partner licensed to practice law in California; Cassandra Seto, an O’Melveny partner licensed to practice law in California; Daniel R. Suvor, an O’Melveny partner licensed to practice law in California; and Kelsey G. Fraser, an O’Melveny associate licensed to practice law in the District of Columbia, contributed to the content of this newsletter. The views expressed in this newsletter are the views of the authors except as otherwise noted.
© 2025 O’Melveny & Myers LLP. All Rights Reserved. Portions of this communication may contain attorney advertising. Prior results do not guarantee a similar outcome. Please direct all inquiries regarding New York’s Rules of Professional Conduct to O’Melveny & Myers LLP, 1301 Avenue of the Americas, Suite 1700, New York, NY, 10019, T: +1 212 326 2000.