
Connecticut AI Safety Bill Passes Both Chambers, Covering Chatbots, Hiring Decisions, and Deepfakes

Connecticut's SB5 cleared both legislative chambers on May 1, 2026, establishing AI obligations around companion chatbots, automated hiring, and synthetic content — making it one of the most comprehensive state AI laws in the country.


By Hector Herrera | May 11, 2026 | Legal

Connecticut's legislature passed one of the most comprehensive state AI safety laws in the country on May 1, 2026, sending SB5 — "An Act Concerning Online Safety" — to Governor Ned Lamont for signature. The bill covers companion chatbots, automated employment decision systems, and synthetic digital content, making Connecticut a benchmark state in AI regulation ahead of the federal preemption fight currently moving through Congress.

The vote positions Connecticut alongside Colorado and California as a state willing to regulate AI systems directly — and sets the stage for a confrontation with tech industry groups pushing for a single federal standard that would override state laws.

What the Bill Does

SB5 creates distinct obligations across three categories of AI deployment:

Companion chatbots — AI systems designed to simulate emotional connection or personal relationships — must disclose their artificial nature to users and are prohibited from exploiting psychological vulnerabilities. The provision targets apps like Replika and Character.ai, which have faced scrutiny over their impact on minors and vulnerable adults.

Automated employment decision systems — AI tools used in hiring, promotion, or termination decisions — must meet bias audit requirements and provide human review upon request. Employers using AI to screen resumes or rank candidates must disclose this practice to applicants.

Synthetic digital content — including AI-generated images, audio, and video — must be labeled as artificially generated when used in specific contexts including political advertising and commercial endorsements. The provision is partly a response to the rising volume of deepfake imagery appearing in insurance fraud claims and political campaigns.

The bill includes safe harbors for developers who comply with the disclosure and audit requirements, a deliberate design choice intended to avoid chilling AI development in the state while still establishing enforceable standards.
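For employers wondering what the hiring provision's bias audit requirement might look like in practice, the bill's text (as summarized here) does not prescribe a specific metric, so the following is only an illustrative sketch: a short Python calculation of the adverse impact (selection rate) ratio that many employment bias audits report. The group labels, sample data, function names, and the informal four-fifths threshold are assumptions for illustration, not anything SB5 specifies.

    # Illustrative sketch only. SB5 (per this article) requires documented
    # bias audits for automated employment decision systems but does not,
    # as summarized here, mandate a particular metric. This computes one
    # commonly reported measure: the adverse impact (selection rate) ratio.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected_bool) pairs."""
        totals = defaultdict(int)
        selected = defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            if was_selected:
                selected[group] += 1
        return {g: selected[g] / totals[g] for g in totals}

    def adverse_impact_ratios(decisions):
        """Each group's selection rate divided by the highest group's rate.

        Ratios below 0.8 are often flagged under the informal "four-fifths
        rule"; whether that threshold satisfies SB5 is an assumption, not
        something the bill specifies.
        """
        rates = selection_rates(decisions)
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    if __name__ == "__main__":
        # Hypothetical resume-screening outcomes: (group, advanced to interview)
        sample = [("A", True)] * 48 + [("A", False)] * 52 \
               + [("B", True)] * 30 + [("B", False)] * 70
        print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.625}

The point of the sketch is not the arithmetic but the documentation: a compliant process would record what was measured, on what data, and what was done when a ratio fell below the chosen threshold.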

The Regulatory Context

Connecticut's action is not happening in isolation. Kelley Drye's regulatory tracking documents simultaneous developments across at least three states:

  • Colorado is debating whether to repeal portions of its 2024 AI Act following industry pressure and concerns about compliance complexity.
  • California continues expanding AI-specific requirements through multiple active bills, including requirements on AI-generated content labeling and algorithmic discrimination in housing and credit.
  • Connecticut is now moving from a state that relied on general consumer protection law to one with AI-specific statutory obligations.

The federal dimension is directly relevant: Congress is considering preemption language in federal AI legislation that would prevent states from enforcing their own AI-specific rules. If passed, SB5 would potentially be overridden. The state-federal tension is not a future risk — it is an active policy conflict with a near-term resolution timeline.

Why This Matters to Businesses

Employers using AI in hiring face the most immediate compliance exposure under SB5 if the governor signs it. Connecticut is a major hub for financial services, insurance, and biotech — sectors that have broadly adopted AI-assisted recruiting tools. The bias audit requirement and human review obligation are not symbolic: they require documented processes and create liability if violated.

Consumer-facing AI companies — particularly those running chatbot or companion AI products — will need to evaluate whether Connecticut's disclosure requirements apply to their existing products, and whether engineering changes are required before the law takes effect.

AI developers broadly should note the safe harbor structure. The bill is not a technology ban — it is a disclosure and accountability framework. Companies that proactively document compliance with its terms are explicitly protected from some categories of enforcement action.

The Companion Chatbot Provision Is New Territory

Most AI legislation to date has focused on hiring discrimination, deepfakes, or general transparency requirements. Connecticut's specific treatment of companion chatbots — AI designed to form emotional relationships with users — is among the first in U.S. law.

In practice, the provision targets a growing category of products that occupies a gray zone between entertainment, mental health support, and social connection. Regulatory ambiguity in this space has allowed rapid commercial expansion; SB5 establishes a disclosure floor that other states are likely to treat as a model.

The concern driving the provision is well-documented: some users, particularly teenagers and adults experiencing isolation or mental health challenges, form strong emotional attachments to AI companions. When those systems are not clearly disclosed as artificial — or when they are designed to maximize engagement over user wellbeing — the harm potential is meaningful. Connecticut legislators opted to regulate disclosure and vulnerability exploitation rather than ban the category.

What Comes Next

Governor Lamont has not publicly indicated whether he will sign the bill. The tech industry has been active in state AI lobbying and may seek amendments or a veto. Watch for:

  • The governor's decision timeline. SB5 moved on the 2026 legislative calendar, so any delay or veto could push further action into the next session.
  • Federal preemption developments. If Congress passes AI legislation with preemption language before SB5's compliance provisions take effect, the state framework may not survive implementation.
  • Other state responses. A Connecticut signature would likely accelerate AI bill movement in New Jersey, Massachusetts, and New York — all of which have active AI legislative sessions. A veto would have the opposite effect, giving hesitant legislators cover to wait for federal action.

The state-by-state AI regulatory patchwork is not going away in the near term. Companies operating across state lines should be building compliance programs that can accommodate heterogeneous requirements rather than waiting for a single federal standard that may arrive after significant state law is already in force.


Source: Kelley Drye & Warren — AI Regulatory Roundup

Key Takeaways

  • Connecticut's SB5 passed both chambers on May 1, 2026 and now awaits Governor Lamont's signature.
  • The bill covers companion chatbots, automated employment decision systems, and synthetic digital content, with safe harbors for compliant developers.
  • Employers using AI in hiring face bias audit, disclosure, and human review obligations.
  • Consumer-facing chatbot companies must assess whether the disclosure requirements apply to existing products.
  • Federal preemption legislation could override the state framework before its compliance provisions take effect.


Written by Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
