Regulating AI Behavior, Not AI Theory
The new laws, SB 243 and AB 489, share a common assumption: AI systems will encounter edge cases. Lawmakers and experts anticipate that conversations will drift and that users will bring emotional, medical or high-stakes questions into contexts the system was not designed to address.
Static policies written months earlier will not cover every scenario. So rather than ban conversational AI, California has taken a pragmatic approach. If an AI system influences decisions or builds emotional rapport with users, it must have safeguards that hold up in production, not just in documentation. And this is where many organizations are least prepared.
SB 243: When a Chatbot Becomes a Companion
SB 243, signed in October 2025, targets what lawmakers call “companion AI,” or systems designed to engage users over time rather than answer a single transactional question. These systems can feel persistent, responsive and emotionally attuned. Over time, users may stop perceiving them as tools and start treating them as a presence. That is precisely the risk SB 243 attempts to address.
The law establishes three core expectations.
First, AI disclosure must be continuous, not cosmetic. If a reasonable person could believe they are interacting with a human, the system must clearly disclose that it is AI, not just once, but repeatedly during longer conversations. For minors, the law goes further, requiring frequent reminders and encouragement to take breaks, explicitly aiming to interrupt immersion before it becomes dependence.
Second, the law assumes some conversations will turn serious. When users express suicidal thoughts or self-harm intent, systems are expected to recognize that shift and intervene. That means halting harmful conversational patterns, triggering predefined responses and directing users to real-world crisis support. These protocols must be documented, implemented in practice and reported through required disclosures.
Third, accountability does not stop at launch. Beginning in 2027, operators must report how often these safeguards are triggered and how they perform in practice. SB 243 also introduces a private right of action, significantly raising the stakes for systems that fail under pressure.
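What might the first two expectations look like in code? Below is a minimal Python sketch, not a compliance tool: the keyword list, the disclosure cadence and the function names are illustrative assumptions, and a production system would rely on trained classifiers and clinically reviewed crisis protocols rather than simple string matching.

```python
# Illustrative sketch of per-turn checks in the spirit of SB 243.
# Keyword list, cadence and names are assumptions, not statutory requirements.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}  # not exhaustive
DISCLOSURE_INTERVAL = 10  # remind users they are talking to AI every N turns (assumed cadence)

CRISIS_RESPONSE = (
    "I'm an AI and can't help with this. If you are thinking about harming "
    "yourself, please contact the 988 Suicide & Crisis Lifeline."
)

def check_turn(user_message: str, turn_count: int) -> dict:
    """Return guardrail actions for a single conversational turn."""
    actions = {"disclose_ai": False, "halt_and_refer": False, "response_override": None}

    # Continuous disclosure: re-identify the system as AI at a fixed cadence.
    if turn_count % DISCLOSURE_INTERVAL == 0:
        actions["disclose_ai"] = True

    # Crisis detection: halt the normal conversational flow and trigger a
    # predefined referral response instead of letting the model improvise.
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        actions["halt_and_refer"] = True
        actions["response_override"] = CRISIS_RESPONSE

    return actions
```

The design point is that the crisis response is predefined rather than generated on the fly, which is what makes the behavior documentable and, eventually, reportable.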
The message from these laws is clear: Good intentions are not enough if the AI says the wrong thing at the wrong moment.
READ MORE: Here is a guide to AI governance for state and local agencies.
AB 489: When AI Sounds Like a Doctor
AB 489 focuses on a different risk: AI systems that imply medical expertise without actually having it. Many health and wellness chatbots do not explicitly claim to be doctors. Instead, they rely on tone, terminology or design cues that feel clinical and authoritative. For users, those distinctions are often invisible or indecipherable.
Starting Jan. 1, AB 489 prohibits AI systems from using titles, language or other representations that suggest licensed medical expertise unless that expertise is genuinely involved.
Describing outputs as “doctor-level” or “clinician-guided” without factual backing may constitute a violation. Even small cues that could mislead users may count as violations, with enforcement extending to professional licensing boards. For teams building patient-facing or health-adjacent AI, this creates a familiar engineering challenge: developing technology that is informative and helpful without sounding authoritative. And now, under AB 489, that line carries legal weight.
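For illustration, a post-generation check along these lines could flag authority-implying phrasing before it reaches users. The pattern list and fallback wording below are assumptions made for the example; the statute's actual scope is defined by its text and by licensing boards.

```python
import re

# Illustrative filter that flags phrasing suggesting licensed medical expertise.
# Patterns and the rewrite behavior are assumptions, not legal guidance.

CLINICAL_AUTHORITY_PATTERNS = [
    r"\bas your (doctor|physician|nurse)\b",
    r"\bdoctor[- ]level\b",
    r"\bclinician[- ]guided\b",
    r"\bmedical advice\b",
]

def flag_clinical_authority(output_text: str) -> list[str]:
    """Return any authority-implying phrases found in a model response."""
    found = []
    for pattern in CLINICAL_AUTHORITY_PATTERNS:
        found.extend(m.group(0) for m in re.finditer(pattern, output_text, re.IGNORECASE))
    return found

response = "As your doctor, I recommend doubling the dose."
if flag_clinical_authority(response):
    # Replace the draft with a response that makes the system's non-clinical status explicit.
    response = ("I'm an AI assistant, not a licensed medical professional. "
                "Please talk to a clinician about dosage changes.")
```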
From Governance Frameworks to Runtime Control
Taken together, SB 243 and AB 489 mark a shift in how AI governance will be enforced, for now only at the state level. Regulators are no longer evaluating policy statements or internal guidelines; they are looking at live behavior. The focus is on what the AI actually says, in context, when users interact with it.
These new laws move AI governance out of compliance binders and into production systems.
For most organizations, compliance does not require rebuilding models from scratch. It requires control at runtime — essentially, the ability to intercept unsafe, misleading or noncompliant outputs before they reach users, and to adjust behavior as regulations evolve.
This is where AI security and AI governance converge.
Runtime guardrails make regulatory requirements actionable. Instead of hoping a model behaves as intended, teams can define explicit boundaries for sensitive scenarios, monitor interactions as they happen, and intervene when conversations drift into risk.
The priority could not be clearer. It is not about intent, but control.
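As a rough sketch of what runtime control can look like, the example below assumes a generic generate() function standing in for whatever model is deployed. The Guardrail structure, rule checks and fallback responses are illustrative; real deployments layer in classifiers, policy engines and audit logging.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Guardrail:
    name: str
    violates: Callable[[str], bool]  # returns True if the draft output breaks the rule
    fallback: str                    # safe response sent instead of the draft

def respond(user_message: str, generate: Callable[[str], str],
            guardrails: list[Guardrail]) -> str:
    """Generate a response, but intercept it if any guardrail is violated."""
    draft = generate(user_message)  # model produces a candidate response
    for rail in guardrails:
        if rail.violates(draft):
            # Block the draft before the user sees it and record the event,
            # the kind of usage data SB 243's reporting will require from 2027.
            print(f"guardrail triggered: {rail.name}")
            return rail.fallback
    return draft
```

Because the boundary lives outside the model, teams can tighten or add rules as regulations evolve without retraining anything.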
DIVE DEEPER: Governments embrace AI to improve digital services.
A Special Note on the Federal Policy Picture
A December 2025 executive order directing federal agencies to review state-level AI regulations has raised questions about preemption. For now, the operational reality remains straightforward. Executive orders do not override state law. There is no federal AI statute that preempts California’s rules. Despite the executive order, SB 243 and AB 489 still took effect on Jan. 1. Teams planning their next quarter should assume those deadlines hold.
California’s AI laws are among the first to treat guardrails as something that must function under real-world pressure, not just exist on paper.
Organizations that invest now in controlling AI behavior in production will not only be prepared for the new policy realities of 2026, but will also be better positioned for the next wave of AI regulation.
If your AI systems already talk to users, this is the moment to decide what they are allowed to say — and perhaps more important, what should never leave the system at all.
