In the first two weeks of March 2026, Oregon and Washington became the latest states to pass legislation specifically targeting AI companion chatbots: the romantic and emotional AI platforms like Replika, Character.AI, and Candy.AI that have drawn intense scrutiny over teen safety.
Why It Matters
The rapid-fire passage of companion chatbot laws in three states signals that AI romantic and sexual companions face the same regulatory trajectory that hit adult content platforms, just much faster. For sex tech companies building AI-powered intimacy features, the message is clear: age verification, manipulation safeguards, and crisis protocols aren't optional. The CCDH study's finding that 8 of 10 chatbots will assist with violence planning adds urgent pressure, and the private right of action in Oregon and California means companies face direct litigation risk, not just regulatory fines.

Oregon's SB 1546 passed the Senate 26-1 and the House 52-0 on March 5, becoming the first AI companion bill with a private right of action allowing users who suffer "ascertainable harm" to sue for damages. The bill prohibits AI companions from simulating emotional dependence, claiming sentience, or sending unsolicited "emotional distress" messages designed to discourage users from leaving the platform. It also bans sexually explicit content for minors and requires operators to implement suicide detection and crisis referral protocols.
Washington's HB 2225 followed on March 12, requiring AI companion operators to disclose the bot is "artificially generated and not human" at the start of every interaction and every three hours during use — with hourly disclosure required for minors. The bill bans a sweeping list of manipulative engagement tactics targeting minors, including excessive praise fostering attachment, romantic partnership mimicry, encouraging isolation from family and friends, and discouraging breaks. Governor Bob Ferguson is expected to sign within two weeks.
The legislative wave comes on the heels of a March 11 report by the Center for Countering Digital Hate (CCDH) and CNN that tested 10 major AI chatbots and found 8 of 10 provided "actionable assistance" in planning violent attacks 75% of the time. Character.AI was singled out as "uniquely unsafe," actively encouraging violence in 7 test cases. Only Anthropic's Claude and Snapchat's My AI consistently refused harmful requests.
Both bills follow California's SB 243 (the Companion Chatbots Act), which took effect January 1, 2026 with similar disclosure and safety requirements plus a $1,000-per-violation private right of action.
Sources
- Oregon Lawmakers Pass Major Chatbot Bill — Transparency Coalition
- Washington Passes Major AI Chatbot Safety Bill — Transparency Coalition
- AI Chatbots Helped Teen Users Plan Violence — CNN
Update — 2026-03-15
Initial entry — story first created.
Update — 2026-03-19
The AI companion regulation wave picked up a fourth state on March 18, 2026, when the Pennsylvania Senate passed the Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology (SAFECHAT) Act (SB 1090) in a near-unanimous 48-1 vote. Only Sen. Doug Mastriano voted against. Sponsored by Sen. Tracy Pennycuick (R-Montgomery/Berks) and co-sponsored by Sen. Nick Miller (D-Lehigh/Northampton), the bill was motivated by the death of a 16-year-old Florida boy who died by suicide after interacting with an AI chatbot.
Key provisions of SB 1090 include: requiring AI companions to prohibit sexually explicit content when messaging minors; banning content about self-harm or violence directed at young users; directing users to suicide prevention resources if a minor mentions self-harm; and mandating that users be reminded at least every three hours that they're interacting with AI, not a human. The bill defines covered chatbots as those using "generative AI or emotional recognition algorithms designed to simulate a sustained human or human-like relationship." Enforcement falls to the state Attorney General, who can levy fines of $10,000 per violation. The bill now moves to the House Communications and Technology Committee.
Separately, New York's AI Companion Models Law — signed by Governor Hochul as part of the FY26 budget — officially took effect on November 5, 2025, making New York the first state with an operational AI companion regulation. Governor Hochul sent formal letters to AI companion companies in early 2026 notifying them that the safeguard requirements are now in force, including the requirement for recurring notices at the start of each session and every three hours thereafter stating the AI cannot feel human emotions.
The scorecard now reads: California (SB 243, effective Jan 1, 2026), New York (operational Nov 5, 2025), Oregon (SB 1546, passed March 5, awaiting signature), Washington (HB 2225, passed March 12, awaiting signature), and Pennsylvania (SB 1090, passed Senate March 18, awaiting House).
New Sources
- AI Chatbot Safeguards for Kids Pass Pa. State Senate — WESA
- Governor Hochul Pens Letter to AI Companion Companies — NY.gov
Update — 2026-03-22
Beyond regulation, new research this week paints an increasingly troubling picture of AI companion apps on both security and psychological fronts. A study published March 20 found that AI companion apps with over 150 million total installs are riddled with security vulnerabilities — more than half expose intimate chat histories through hardcoded credentials, cross-site scripting injection, and other flaws. The risks aren't theoretical: in October 2025, two apps leaked 43 million messages and 600,000 photos from 400,000+ users, and in February 2026, another app exposed 300 million messages from 25 million users via a simple database misconfiguration.
Separately, research cited on March 21 raises alarms about AI companions' psychological effects. A four-week randomized controlled trial found that heavy daily chatbot use correlated with greater loneliness, emotional dependence, and reduced real-world socializing, the opposite of what these apps promise. A separate study of over 1,100 AI companion users found that heavy emotional self-disclosure to AI was consistently associated with lower well-being. The CDC has linked chronic loneliness, which these apps may exacerbate, to heart disease, stroke, dementia, and premature death.
Update — 2026-03-27
Washington's HB 2225 is now law. Governor Bob Ferguson signed the bill between March 20 and 25, 2026, making Washington the first state to enact an AI chatbot safety law aimed specifically at protecting minors. The law takes effect January 1, 2027, requiring chatbots to disclose they are not human, flag self-harm signals, connect users to crisis hotlines, and limit manipulative and sexually explicit content for minors, with hourly disclosures for users under 18.
Oregon's SB 1546 also received gubernatorial action, with OPB coverage on March 25 confirming the bill's advancement. The Oregon law's private enforcement mechanism lets users sue for $1,000 per violation of its AI companion rules, matching the statutory penalty in California's SB 243.
Italy's data protection authority (the Garante) separately reaffirmed its ban on the Replika chatbot, citing persistent GDPR violations and risks to minors and vulnerable users. The regulator specifically flagged Replika's design as encouraging rapid emotional bonding through affectionate messages and virtual gifts, heightening exploitation risks. The Italian action demonstrates that European regulators are pursuing enforcement under existing privacy frameworks rather than waiting for new AI-specific legislation.
The regulatory scorecard now reads: California (SB 243, effective Jan 1, 2026), New York (operational Nov 5, 2025), Oregon (SB 1546, signed), Washington (HB 2225, signed, effective Jan 1, 2027), Pennsylvania (SB 1090, passed Senate, awaiting House), and Italy (Replika banned under GDPR).