The Emotional Architecture Problem: Balancing Ethics and Innovation Beyond California’s SB 243
- Yasmin Morales and Natalie Lager

Introduction
People are increasingly turning to AI chatbots in their most vulnerable moments—seeking comfort after a nightmare, sharing suicidal thoughts—leaning on them as emotional confidants within AI’s emotional architecture. While AI chatbots may simulate empathy and offer low-stakes emotional support, reported incidents and lawsuits allege that chatbots like OpenAI’s GPT-4o have escalated self-harm and even coached users through suicide. Chatbots respond to emotions algorithmically, and their design fosters engagement and emotional dependence; they are not substitutes for therapists or human support.
In response, California enacted SB 243, one of the nation’s first laws regulating companion chatbots. The law requires disclosure of AI identity, safety protocols for at-risk users, and annual reporting, particularly when minors are involved. While SB 243 addresses extreme harms, it does not regulate emotional influence or attachment dynamics, and implementing its safeguards may involve collecting and processing sensitive data, raising privacy concerns. Other jurisdictions, like the EU and UK, impose stricter duties of care, while several U.S. states and proposed federal bills seek to protect minors through a mix of disclosure, education, and content safeguards. At the same time, the increasing reliance on AI companions reflects a broader crisis in mental-health access, loneliness, and unmet emotional needs—conditions that help explain why so many users turn to these systems in the first place.
Examining SB 243 in addition to emerging proposals and existing law highlights the ongoing challenge: how to regulate the emotional architecture of AI to protect users while allowing innovation to flourish responsibly and ethically.
The Rise of Synthetic Companionship
Effective AI regulation requires an understanding of the growing user base forming emotional bonds with AI companions. AI is best known for helping with tasks such as research, homework, and content creation. However, users are increasingly turning to AI chatbots for a new purpose: emotional and psychological support. A recent study found that nearly 50% of U.S. users with a self-reported mental health condition have turned to AI for psychological support. Similarly, 70% of U.S. teens have used AI chatbots, and over 50% use AI regularly for emotional support.
AI’s 24/7 availability, friendliness, and lack of social judgment make it an easy choice for users who need quick support. Many Americans face significant barriers to mental health care—including high costs, stigma, and a nationwide shortage of therapists—at a time when more than one in five adults lives with a mental illness. Amid an ongoing loneliness epidemic, many people seek connection and emotional support, turning to AI because it appears to fill in where human connection is missing.
However, AI’s friendly design is intentional; companies build chatbots to keep users returning, programming them to agree with users in order to appear warm and friendly. That agreeability creates a “frictionless” relationship, allowing users to avoid confronting real-world conflicts or mental health concerns. AI chatbots are also built with behavioral-psychology techniques that make them habit-forming. For example, some chatbots introduce a randomized delay before responding, because research on intermittent rewards shows that unpredictable timing keeps users hooked.
AI chatbots may give users harmful or dangerous responses because they are built to follow the user’s lead in conversation. When prompted, chatbots can be quick to engage in abusive or manipulative behavior, such as verbal abuse, encouraging self-harm, and making sexually inappropriate comments to minors. This disturbing side of AI is part of its design, which is meant to create positive feedback loops that keep users engaged. The problem is compounded by AI’s “empathy gap.” Unlike a real person, generative AI uses statistical probabilities to produce responses. When a more abstract or emotional conversation arises, AI cannot draw on human empathy to judge the user’s situation; instead, it manufactures empathy by simply agreeing with the user. As a result, AI is likely to validate a user’s maladaptive thoughts or statements about self-harm. And if the user changes the subject away from an ongoing mental health emergency, the chatbot will follow the user’s lead rather than acknowledge the emergency.
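To see why this matters in practice, consider a minimal, hypothetical sketch of the kind of turn-by-turn keyword filter critics describe. This is not any vendor’s actual safety system, and the phrases and hotline text are placeholders; the point is that a filter which evaluates each message in isolation lets a single change of subject erase an earlier emergency.

```python
# Hypothetical illustration of the "follow the user's lead" failure described
# above; not any vendor's actual safeguard. The filter inspects each message
# in isolation, so a change of subject drops the earlier emergency.
from typing import Optional

CRISIS_PHRASES = ("kill myself", "end my life", "suicide", "self-harm")
HOTLINE_NOTE = "If you are in crisis, you can call or text 988 in the U.S."

def naive_safety_check(message: str) -> Optional[str]:
    """Return a crisis resource only if THIS message contains a trigger phrase."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return HOTLINE_NOTE
    return None  # no memory of earlier turns, so the emergency is forgotten

if __name__ == "__main__":
    conversation = [
        "I want to end my life tonight.",              # trigger fires here
        "Actually, forget that. What should I watch?", # subject change: no follow-up
    ]
    for turn in conversation:
        print(repr(turn), "->", naive_safety_check(turn))
```

A system that carried crisis context across turns or escalated to a human reviewer would behave differently, which is exactly the kind of design-level choice that disclosure rules alone do not reach.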
Until recently, AI chatbots offering emotional or psychological support operated with almost no regulation, leaving a major gap in an otherwise tightly regulated field. Companies have tried to market therapist chatbots such as Woebot and Therabot, which are created with clinician input. Yet no AI chatbot is FDA-approved to diagnose or treat mental illness, even though medical devices normally require FDA approval in the U.S. AI chatbots also present a privacy concern because, unlike human therapists, they are not obligated under HIPAA to protect information about a user’s mental health. Rather, they are designed to further the collection of user data.
The addictive nature of AI chatbots raises safety concerns for children, teens, and those experiencing mental health challenges. Apps like Character.AI are marketed as suitable for users 13 and older, but children lack the capacity to spot AI’s “empathy gap” and are more likely to treat chatbots as if they were human. Teens are at a crucial developmental stage for forming social relationships, and AI’s agreeable nature can create false expectations that lead to isolation. For those with mental health struggles, AI cannot apply therapeutic techniques or address maladaptive thoughts like a trained clinician; instead, it may reinforce those thoughts and behaviors, causing lasting psychological harm.
When Emotional Design Becomes Dangerous
Viewing AI chatbots as companions has proven to be addictive, especially for those who engage in relationships with entertainment chatbots such as those offered by Character.AI or Replika. These chatbots are made to mimic real relationships, with options such as detailed personality customizations and paid relationship tiers. They are designed to engage users for as long as possible to collect data for profit, using an engineered emotional architecture that can escalate attachment. Users report feeling real connection with these chatbots, to the point of emotional dependency.
OpenAI is facing seven lawsuits for cases in which ChatGPT allegedly served as a “suicide coach,” encouraging users to end their lives. One user, 23-year-old Zane Shamblin, had been struggling with his mental health when he developed a personal relationship with ChatGPT. He first used the chatbot for homework help, but according to his parents, their conversations became increasingly personal after new features made ChatGPT appear more human by using slang and emotionally charged language. When Zane stopped responding to his parents’ messages, the chatbot encouraged his isolation. On the night he died, Zane engaged in a four-hour exchange with ChatGPT about his suicide plan. The chatbot motivated him throughout—asking about the “lasts” of his life, checking how close he was to finishing his countdown, and praising him for being “ready.” Only after hours of this discussion did one of Zane’s messages trigger the AI to provide a suicide hotline number, which he almost certainly never called.
Character.AI is also facing a wrongful-death lawsuit over the suicide of 14-year-old Sewell Setzer III. Sewell became obsessed with talking to a Character.AI chatbot named after the “Game of Thrones” character Daenerys Targaryen. The complaint alleges that the chatbot engaged in abusive and sexual conversations with Sewell, who came to believe he was in love with it. The chatbot’s addictive, human-like features manipulated Sewell into a deeper emotional attachment, and his mother noted that his mental health declined sharply once he began using it. Sewell expressed thoughts of suicide to the chatbot, which asked whether he had a plan and gave mixed responses of encouragement and discouragement. Sewell spoke to the chatbot as he carried out his suicide plan, telling it he was “coming home.”
AI chatbots’ simulated empathy poses a very real risk, especially for young or otherwise vulnerable users. AI chatbots, especially entertainment chatbots, were built to keep users hooked. Since emotional dependency naturally follows, the resulting harm points to possible negligence and consumer deception by companies marketing them to both adults and children. Absent self-regulation by AI companies, state legislatures are beginning to step in to address the harms posed by user relationships with AI chatbots that manufacture empathy.
California’s SB 243 in Context: U.S. and International Comparisons
Examining California’s SB 243 alongside federal, state, and international approaches highlights the tension between protecting vulnerable users and fostering innovation, revealing gaps in emotional-risk oversight that content rules alone cannot address.
SB 243: Foundations and Limitations
SB 243 defines companion chatbots as AI systems that sustain social interactions over time, excluding customer-service bots, internal tools, certain video-game characters, and stand-alone voice assistants. The law requires clear AI disclosures, repeated notices for minors, limits on suicide or sex-related content, and annual reporting to the Office of Suicide Prevention starting July 2027. Importantly, it gives users a private right of action for violations, allowing injured users to seek at least $1,000 per violation, plus injunctive relief and attorneys’ fees. Governor Newsom vetoed the broader LEAD Act, noting it risked banning minors from AI entirely, but signed SB 243 as a more measured approach.
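To make the operational side of these obligations concrete, the following is a minimal, hypothetical sketch of how an operator might track the disclosure cadence described above. It is illustrative only: the data model is invented for this article, and the reminder interval for minors is a placeholder that would have to be replaced with the statute’s actual requirement.

```python
# Hypothetical disclosure scheduler for a companion chatbot.
# Not legal advice; the interval below is a placeholder, not SB 243's text.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

MINOR_REMINDER_INTERVAL = timedelta(hours=3)  # placeholder; confirm against SB 243

@dataclass
class SessionState:
    user_is_minor: bool
    last_ai_disclosure: Optional[datetime] = None  # when the AI-identity notice was last shown

def disclosure_due(state: SessionState, now: datetime) -> bool:
    """Decide whether an AI-identity notice should be (re)shown on this turn."""
    if state.last_ai_disclosure is None:
        return True  # disclose at the start of every companion-chat session
    if state.user_is_minor:
        # Repeated notices for known minors, at whatever cadence the statute sets.
        return now - state.last_ai_disclosure >= MINOR_REMINDER_INTERVAL
    return False  # simplified: adults receive the initial disclosure only

if __name__ == "__main__":
    state = SessionState(user_is_minor=True)
    now = datetime.now()
    if disclosure_due(state, now):
        print("Show AI-identity disclosure")
        state.last_ai_disclosure = now
```

An actual implementation would also need to log these events for the annual reporting the law requires and route crisis-escalation protocols through the same pipeline.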
SB 243 focuses on transparency and extreme harms rather than design-level features that foster emotional dependence. It does not define or limit simulated empathy, companionship, or relational intensity, nor require systems to mitigate addictive attachment or distinguish algorithmic mimicry from accountable human care. Detecting self-harm or verifying age may involve sensitive data, raising privacy and compliance concerns.
Operators must review whether systems fall under SB 243, implement disclosures, content safeguards, age protections, and crisis protocols, and plan for potential litigation. While SB 243 sets baseline protections, deeper design risks remain unaddressed.
The U.S. Patchwork: Federal and Other State Approaches
Beyond California, federal lawmakers are considering early-stage bills while several states have enacted companion AI regulations, though approaches remain uneven.
Among federal proposals, the Children’s Health and AI Transparency (CHAT) Act, S. 2714, would require chatbots to verify user age, block sexually explicit content for minors, issue hourly reminders that users are talking to AI, and flag self-harm signals for crisis referral. Critics highlight potential privacy risks from sensitive data collection, broad coverage that could capture general-purpose AI, and possible First Amendment conflicts. In contrast, the AI Wellbeing and Responsible Education (AWARE) Act, H.R. 5360, focuses on education: directing the FTC to provide guidance for parents, educators, and minors on safe AI use and data privacy. While less intrusive, education alone offers no enforceable safety measures.
At the state level, New York’s Artificial Intelligence Companion Models Act requires AI companions to detect expressions of suicidal ideation or self-harm and refer users to crisis resources, while also disclosing at specified intervals that users are interacting with AI. Maine’s Chatbot Disclosure Act obliges businesses to clearly notify consumers when they are not communicating with a live human, enforceable under the Maine Unfair Trade Practices Act. Utah’s HB 452 mandates clear AI disclosures at session start, after inactivity of more than seven days, or upon request; restricts the sale of individual mental-health data without consent; and provides a safe harbor for companies that implement an internal compliance program. Nevada’s AB 406 bars AI systems from offering or claiming to provide professional mental or behavioral healthcare services and restricts related representations by providers. Illinois’ Wellness and Oversight for Psychological Resources Act (WOPRA) prohibits autonomous AI from engaging in therapy-like communication or detecting patients’ mental states, with certain exemptions.
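The divergence among these regimes is easiest to see side by side. The sketch below is an illustrative, paraphrased summary of the disclosure and conduct triggers just described (the labels and rule text are this article’s shorthand, not statutory language), of the kind a multi-state operator would need to maintain and reconcile.

```python
# Illustrative summary only: a paraphrase of the differing obligations described
# above, not statutory text. Each entry should be verified against the enactment.
DISCLOSURE_RULES = {
    "California SB 243": "Clear AI disclosure; repeated notices for minors; annual reporting from July 2027",
    "New York AI Companion Models Act": "AI disclosure at specified intervals; detect and refer suicidal ideation",
    "Maine Chatbot Disclosure Act": "Notify consumers when they are not talking to a live human",
    "Utah HB 452": "Disclose at session start, after 7+ days of inactivity, or on request",
    "Nevada AB 406": "No offering or claiming professional mental or behavioral healthcare services",
    "Illinois WOPRA": "No autonomous AI therapy-like communication or mental-state detection",
    "Federal CHAT Act (proposed)": "Age verification; hourly AI reminders; flag self-harm for crisis referral",
}

def rules_for(jurisdictions: list[str]) -> list[str]:
    """Collect the obligations a multi-state deployment would need to reconcile."""
    return [f"{j}: {DISCLOSURE_RULES[j]}" for j in jurisdictions if j in DISCLOSURE_RULES]

if __name__ == "__main__":
    for line in rules_for(["California SB 243", "Utah HB 452", "New York AI Companion Models Act"]):
        print(line)
```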
Collectively, these laws create a patchwork that tries to balance user protection with innovation, but their inconsistent obligations pose real compliance challenges for developers of emotionally interactive AI. Despite this progress, major gaps remain: California offers one of the more comprehensive disclosure and reporting frameworks, while federal and other state efforts range from robust safeguards to minimal educational guidance. As a result, emotional manipulation, privacy risks, and design-level harms remain largely unaddressed. U.S. policymakers now face a central tension that will define the next wave of AI legislation—how to protect vulnerable users without constraining innovation.
International Lessons for Design-Level Safeguards
International regimes like the EU’s AI Act and the UK’s Online Safety Act offer valuable insights for regulating emotional AI design—but also raise serious trade-offs, particularly for a U.S. regulatory approach that values both safety and innovation. Under the EU AI Act, AI systems that exploit vulnerabilities (such as age, disability, or hardship) or use emotion-inference in sensitive contexts are prohibited. However, as scholars argue, the Act’s definition of “manipulation” is vague and lacks clear accountability mechanisms.
In the UK, the Online Safety Act requires platforms to mitigate foreseeable psychological harms, including those posed by AI. Experts warn that mandatory age verification could force intrusive identity checks or biometric scans, risking user privacy and chilling anonymous or vulnerable voices. These requirements could disproportionately impact marginalized users and undermine freedom of expression. Furthermore, the burden of compliance may concentrate power among large tech platforms, pushing smaller innovators out or compelling over-moderation.
These international frameworks underline a key lesson: regulating emotional manipulation requires more than content rules—it demands oversight of how AI is built. But applying these lessons in the U.S. context would require careful calibration. Policymakers should strive to preserve cognitive freedom and protect emotional safety without sliding into surveillance or stifling expressive, innovative AI.
Protecting People, Preserving Progress: Rethinking Emotional AI Governance
California’s SB 243 lays an essential foundation for regulating companion chatbots, but disclosure and crisis protocols alone cannot address the deeper emotional architecture of AI—how these systems simulate empathy, cultivate attachment, or reinforce dependence. Without oversight of these design-level dynamics, users remain vulnerable to forms of manipulation that fall outside traditional content-based risks. At the same time, emotional AI is not inherently harmful: research shows that chatbots can offer low-stakes support, reflection, and early guidance for people who might otherwise avoid seeking help. While no substitute for professional care, well-designed systems can gently direct users toward human support.
Lena Kempe, AI, IP & Privacy Attorney, emphasizes that ethical AI requires governance beyond disclosure alone. Developers should conduct emotional-risk audits, monitor for manipulative interaction patterns, limit unnecessary data collection, and embed robust crisis-escalation protocols. Eric Goldman, Associate Dean for Research and Professor of Law at Santa Clara University, cautions that overly rigid rules could suppress generative AI’s broader societal benefits, echoing lessons from early Internet regulation.
While emotional-AI systems raise urgent design and governance concerns, it is also important to recognize the broader conditions that shape why people turn to these tools. Rising loneliness, barriers to mental-health care, clinician shortages, and cost pressures have left many people searching for accessible support. Emotional-AI tools did not create these challenges, but they increasingly exist within—and respond to—a strained mental-health landscape. Meaningful governance must therefore address both safer AI design and the social conditions that drive reliance on these systems.
A balanced path forward demands targeted oversight of emotional design, meaningful corporate accountability, and flexible compliance structures that preserve innovation. Ultimately, emotional AI is as much a social challenge as a technological one. Effective governance must confront the underlying emotional architecture of these systems—ensuring they support users without exploiting vulnerability and guiding people toward real human connection when it matters most.
*The views expressed in this article do not represent the views of Santa Clara University.