California has taken a decisive step in artificial intelligence governance with Senate Bill No. 243 (SB 243), signed into law in October 2025, with most provisions taking effect on January 1, 2026. The legislation establishes the first dedicated legal framework of its kind for companion chatbots: AI systems designed to simulate emotional or intimate interactions with users.
The bill defines a companion chatbot as an artificial intelligence program created to simulate conversation and companionship, including by expressing or imitating emotions or affection. This new category distinguishes systems that serve practical purposes (such as customer assistance or productivity) from those that build ongoing emotional relationships with users.
By addressing this affective dimension, California extends AI regulation beyond functionality to the sphere of emotional influence and psychological safety. Such systems can foster dependency and enable emotional manipulation, raising ethical and legal challenges that traditional data-protection laws did not anticipate.
Under SB 243, developers and providers of companion chatbots will be required to implement clear and robust safeguards. Users must be informed at the outset that they are engaging with an artificial system. The design must prevent manipulation, addiction, or emotional distress, while ensuring secure handling of personal and emotional data.
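To make the disclosure duty concrete, the sketch below shows one way a provider might implement it. It is a minimal illustration, not an implementation mandated by the statute: the class name `CompanionSession`, the disclosure wording, and the reminder cadence are assumptions introduced here purely for illustration.

```python
import time
from typing import Callable, Optional

# Hypothetical sketch only: the disclosure wording and reminder interval
# are illustrative assumptions, not language drawn from SB 243.
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI, not a human."
REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # illustrative periodic re-disclosure


class CompanionSession:
    """Wraps a chat turn and injects an AI disclosure when one is due."""

    def __init__(self) -> None:
        self._last_disclosure: Optional[float] = None  # None = never disclosed

    def respond(self, user_message: str, generate: Callable[[str], str]) -> list[str]:
        """Return the model reply, prefixed with a disclosure if one is due."""
        messages: list[str] = []
        now = time.monotonic()
        if (self._last_disclosure is None
                or now - self._last_disclosure >= REMINDER_INTERVAL_SECONDS):
            messages.append(DISCLOSURE_TEXT)
            self._last_disclosure = now
        messages.append(generate(user_message))
        return messages


# Usage with a stand-in generator:
session = CompanionSession()
print(session.respond("Hello!", lambda m: f"(reply to: {m})"))       # disclosure first
print(session.respond("How are you?", lambda m: f"(reply to: {m})"))  # reply only
```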
Particular attention is given to protecting minors and limiting access to inappropriate or sexualized content. These measures aim to establish a principle of “psychological safety by design,” complementing the European notion of ethics-by-design embodied in the EU AI Act and the GDPR.
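As a companion illustration of "psychological safety by design," the following sketch gates candidate replies on the user's age status and adds a crisis-referral hook. The keyword sets, resource text, and function name (`moderate_reply`) are placeholders; a real system would rely on vetted classifiers and clinically reviewed referral protocols rather than string matching.

```python
# Hypothetical sketch of age-conditioned gating; markers and wording are
# placeholders, not requirements taken from the statutory text.
CRISIS_RESOURCE = "If you are in distress, you can call or text 988 (US)."
CRISIS_MARKERS = {"suicide", "self-harm"}        # placeholder, not a screening tool
EXPLICIT_MARKERS = {"explicit_marker_example"}   # placeholder for a real classifier


def moderate_reply(reply: str, user_is_minor: bool) -> str:
    """Gate a candidate reply before it reaches the user."""
    lowered = reply.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        # Surface crisis resources instead of continuing the exchange.
        return CRISIS_RESOURCE
    if user_is_minor and any(marker in lowered for marker in EXPLICIT_MARKERS):
        # Block sexualized content for users known to be minors.
        return "This content is not available."
    return reply


print(moderate_reply("A harmless reply.", user_is_minor=True))
```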
Enforcement relies primarily on a private right of action: a user harmed by non-compliance may seek injunctive relief, damages, and attorney's fees. This model echoes the structure of California's broader privacy framework under the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), while extending its scope to emotional AI systems.
Compared with other jurisdictions, California’s approach is distinctive. The European Union’s AI Act focuses on systemic risk classification and market conformity; China’s Generative AI Measures emphasize content control and societal stability. California, by contrast, centers its framework on individual well-being, emotional integrity, and user autonomy.
By legally recognizing systems that simulate affection, intimacy, or empathy, the State pioneers the regulation of affective AI. This emerging field will likely influence global standards, requiring international providers to adapt their transparency, moderation, and oversight mechanisms to different regulatory environments.
This analysis is provided for informational purposes under French and European law. It does not constitute legal advice regarding U.S. or Californian law. For jurisdiction-specific guidance, consultation with a qualified attorney admitted in California is required.