Client Insight: California SB 243: New Compliance Requirements for Operators of AI “Companion Chatbots”
On October 13, 2025, California Governor Newsom signed Senate Bill 243 on Companion Chatbots (“SB 243”) into law. SB 243 is the first U.S. state law that requires operators of “companion chatbot” platforms to meet design, disclosure, safety and reporting requirements, including obligations to restrict harmful content and provide disclosures intended to address online safety.
Key Takeaways
- Phased Compliance Dates:
  - Effective January 1, 2026, SB 243 imposes disclosure and transparency obligations, mandatory safety protocols, and reporting requirements on operators of companion chatbot platforms.
  - Effective July 1, 2027, operators of companion chatbot platforms become subject to annual reporting requirements.
 
- Private Right of Action: SB 243 creates a private right of action for any person who suffers “injury in fact” caused by a violation of the law. Plaintiffs may bring a civil action seeking injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs. SB 243 further clarifies that the duties imposed on operators under this law are “cumulative” and do not relieve companion chatbot operators of their compliance obligations under other applicable laws.
- California Regulatory Trends: This bill is part of a broader set of privacy and online safety laws recently passed in California that take effect in 2026 and 2027. For more information, please see California Raises the Bar: Groundbreaking Privacy Laws Bring Universal Opt-Out, Data Broker Transparency, and Health Data Protections.
 
Scope
Who is Covered (“Operator”)?
SB 243 applies to any “operator,” which is defined as “a person who makes a companion chatbot platform available to a user in California.” Note that the compliance obligations of an “operator” fall on the individual or entity[1] offering the companion chatbot to a California-based consumer, even if that chatbot is built largely on top of a third-party vendor’s AI technology (e.g., ChatGPT, Gemini, Claude). In other words, if a company releases a companion chatbot built on top of a third-party AI model, the company itself is responsible for meeting SB 243’s requirements and cannot delegate its compliance obligations to the third-party AI provider.
What is Covered (“Companion Chatbot”)?
SB 243 defines a “companion chatbot platform” as a platform that allows users to engage with companion chatbots. A “companion chatbot,” in turn, is defined as an AI system with a natural language interface that (i) provides adaptive, human-like responses to user inputs and (ii) is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.
What Types of Chatbots Are Not Covered?
The law expressly excludes the following types of chatbots from the definition of “companion chatbot”:
- A bot used only for customer service, business operational purposes, productivity tools, internal research, or technical assistance;
- A bot that is a feature of a video game, is limited to replies related to the video game, cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, and cannot maintain a dialogue on other topics unrelated to the video game; and
- A standalone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user.
 
What Types of Chatbots Are Covered?
SB 243 covers a broad range of AI chatbots; the key factor is whether a chatbot is “capable of meeting a user’s social needs.” Examples include:
- Customer Service: Chatbots that recall prior user conversations or interactions (i.e., continue a chat session), remember user preferences, provide recommendations tailored to a specific user, adapt their tone to a particular user, or engage in human-like dialogue will be regulated under SB 243. This may include e-commerce chatbot communications that extend beyond customer service, such as regularly checking in or attempting to build a relationship with users.
- Virtual Assistants: Virtual assistant chatbots that go beyond productivity support and build rapport with a user across multiple sessions (e.g., share reminders, organize and check in with the user on daily tasks, etc.).
- Wellness Coaching: Chatbot-based coaches that offer users wellness support, encouragement, and progress tracking. This could include healthcare apps that track symptoms or check in on fitness goals, or financial services apps that provide money management support or monitor progress toward savings goals.
- Academic Support: Academic support chatbots that follow up with students over time, provide exam prep and study assistance services, and offer ongoing motivational reinforcement across learning development.
- Relationship Companions: Chatbots that enable users to interact with persistent AI avatars designed for emotional or social companionship by offering both romantic and platonic support, advice, and ongoing engagement.
 
Key Requirements
SB 243 imposes disclosure and transparency obligations, harm prevention and safety procedures, and reporting requirements on operators of companion chatbot platforms, with additional requirements specific to users who are minors (under 18 years of age).
AI Disclosures that Must Be Embedded in the Product:
- AI Disclosure (All Users): If a reasonable person interacting with the chatbot could be misled into believing that they are interacting with a human, the operator must provide a “clear and conspicuous notification” that the chatbot is artificially generated and not human.[2]
- Mandatory Suitability Warning (All Users): The operator must clearly and conspicuously disclose to the user “on the application, the browser, or any other format that a user can use to access the chatbot” that the companion chatbot may not be suitable for some minors. Disclosure can be accomplished with an in-app banner, popup, or some other prominent notification, but must not be buried in the company’s standard terms or policies.
- Additional Recurring Notifications (Minor Users Only): For users that the operator knows are minors, the operator must additionally provide a clear and conspicuous notification at least every three hours during a continuous interaction to remind the user to take a break and that the chatbot is AI-generated.
- This requirement applies to each individual minor user, so companies must implement these safeguards on a user-by-user basis (see the illustrative sketch following this list).
- “Knowledge” refers to whether a company is aware, or reasonably should be aware, that a particular user is under 18 years of age. Companies that make chatbots available to minors should implement age verification and parental consent processes during user account registration or require proof of legal age where appropriate, such as prior to users accessing companion chatbot services.
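For engineering teams scoping the recurring-notification requirement, the following is a minimal, illustrative Python sketch of tracking the three-hour reminder cadence on a per-user basis for known minors. It is not legal advice and not a complete compliance implementation; the class, field, and message names are hypothetical.

```python
# Illustrative sketch only -- not legal advice, not a complete SB 243 implementation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

BREAK_REMINDER_INTERVAL = timedelta(hours=3)  # "at least every three hours"

REMINDER_TEXT = (
    "Reminder: you are chatting with an AI companion chatbot, not a person. "
    "Consider taking a break."
)


@dataclass
class MinorSessionTracker:
    """Tracks one known-minor user's continuous interaction and surfaces the
    recurring notification on a user-by-user basis (names are hypothetical)."""
    is_known_minor: bool
    session_start: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    last_reminder_at: datetime | None = None

    def maybe_get_reminder(self, now: datetime | None = None) -> str | None:
        """Return the reminder text when three hours have elapsed since the
        session start or the last reminder; otherwise return None."""
        if not self.is_known_minor:
            return None
        now = now or datetime.now(timezone.utc)
        anchor = self.last_reminder_at or self.session_start
        if now - anchor >= BREAK_REMINDER_INTERVAL:
            self.last_reminder_at = now
            return REMINDER_TEXT
        return None
```

In practice, an operator might call maybe_get_reminder() before delivering each chatbot response in a continuous session and, whenever it returns text, display that notification clearly and conspicuously to the minor user.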
 
 
Required Harm Prevention Protocols and Disclosures:
Under the law, companion chatbots may not engage with users unless the operator implements and maintains safety protocols to prevent the production of suicidal ideation, suicide, or self-harm content. Operators must meet the following requirements:
- Harm Prevention Protocols: Operators are required to institute and publish details of such harm prevention measures and safety protocols on their website, including notifications that refer an at-risk user to appropriate crisis service providers (e.g., suicide hotlines or crisis text lines) when the user expresses suicidal ideation, suicide, or self-harm (an illustrative sketch of such a referral protocol follows this list).
- Sexually Explicit Content Restrictions (Minor Users Only): Operators must institute reasonable measures to prevent chatbots from (i) producing sexually explicit visual material or (ii) directly stating that the minor should engage in sexually explicit conduct.
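As an illustration of how a referral protocol might be wired into a chatbot pipeline, below is a simplified Python sketch. It is not legal advice, the keyword matching is only a placeholder for the evidence-based detection methods the law contemplates, and all function, message, and field names are hypothetical.

```python
# Illustrative sketch only -- not legal advice; keyword matching is a placeholder
# for a vetted, evidence-based detection method.
from datetime import datetime, timezone

CRISIS_REFERRAL_MESSAGE = (
    "If you are having thoughts of suicide or self-harm, help is available. "
    "In the U.S., you can call or text 988 (Suicide & Crisis Lifeline)."
)

# Extremely simplified trigger phrases, for illustration only.
CRISIS_PHRASES = ("kill myself", "want to die", "hurt myself", "end my life")


def expresses_crisis(user_message: str) -> bool:
    """Rough stand-in for detecting suicidal ideation, suicide, or self-harm."""
    text = user_message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)


def apply_harm_prevention_protocol(user_message: str, referral_log: list[dict]) -> str | None:
    """Return a crisis-service-provider referral when the user's message expresses
    suicidal ideation or self-harm, and record the event without any user
    identifiers so it can later be counted for the annual report."""
    if expresses_crisis(user_message):
        referral_log.append(
            {"event": "crisis_referral",
             "timestamp": datetime.now(timezone.utc).isoformat()}
        )
        return CRISIS_REFERRAL_MESSAGE
    return None
```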
 
Annual Reporting Requirements (starting July 1, 2027):
Starting July 1, 2027, operators must submit an annual report to the California Department of Public Health’s Office of Suicide Prevention detailing the following:
- The number of times the operator issued a crisis-service-provider referral during the preceding calendar year;
- Protocols put in place to detect, remove and respond to instances of suicidal ideation by users; and
- Protocols in place to prohibit a companion chatbot response about suicidal ideation or actions with the user.
 
Operators must measure suicidal ideation using evidence-based methods, but are prohibited from reporting any user identifiers or personal information.
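Continuing the hypothetical referral log from the sketch above, the following Python sketch shows how an operator might aggregate referral counts for the preceding calendar year without including user identifiers or personal information. The report fields are illustrative only; the actual required content and format are set by SB 243 and the Office of Suicide Prevention, not by this sketch.

```python
# Illustrative sketch only -- report fields and names are hypothetical.
from datetime import datetime


def build_annual_report(referral_log: list[dict], report_year: int) -> dict:
    """Count crisis-service-provider referrals issued during the specified
    (preceding) calendar year. Only aggregate counts are included; no user
    identifiers or personal information are reported."""
    referral_count = sum(
        1
        for event in referral_log
        if event.get("event") == "crisis_referral"
        and datetime.fromisoformat(event["timestamp"]).year == report_year
    )
    return {
        "reporting_year": report_year,
        "crisis_service_referrals_issued": referral_count,
        # The statute also requires narrative descriptions of detection and
        # response protocols; those would be prepared separately.
    }
```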
How can GD help?
If you have any questions regarding this client alert, or if your company needs assistance with this topic, including practical guidance and hands-on support for conducting a readiness assessment or preparing an implementation roadmap tailored to your chatbot and AI-interaction systems, please reach out to your Gunderson Dettmer attorney or contact any member of our Strategic Transactions & Licensing Group.
[1] While SB 243 does not redefine the term “person,” California typically interprets the term broadly to include both individuals and legal entities. As a result, an “operator” may be either a company or an individual making a companion chatbot available in California, though in practice this will most often be the entity. Note, however, that plaintiffs may pair SB 243 claims with the state’s Unfair Competition Law, which can expose officers, directors, or employees to personal liability if they personally participate in, authorize, or direct conduct giving rise to a violation of SB 243.
[2] This AI disclosure requirement is in addition to California’s existing SB 1001 Bot Disclosure Law (“SB 1001”), which requires companies to clearly disclose when a user is interacting with a bot used to “knowingly deceive” a person for the purpose of incentivizing a commercial sale or influencing a vote in an election. As defined under SB 1001, a “bot” is an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”