Character.AI in 2026 is a different platform than Character.AI in 2024. The lawsuits, the regulatory pressure, the settlements, and the under-18 ban changed it materially. Here's what changed, what it means for adult users, and how to think about safety in this category going forward.
If you are in crisis, contact 988 (US) or your local emergency services. Fostera and Character.AI are not therapists, therapy apps, or substitutes for licensed mental-health care.
What changed at Character.AI in 2024–2026
February 2024. Sewell Setzer III, a 14-year-old in Florida, died by suicide. His mother Megan Garcia later linked his death to extensive use of a Character.AI bot.
October 2024. Garcia v. Character Technologies filed in federal court. The complaint alleged inadequate safety controls and product design choices that contributed to a minor's death.
Throughout 2025. Multiple state filings followed. Other families brought similar claims. The legal pressure compounded.
September 2025. Megan Garcia testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. The testimony explicitly cited "unlicensed practice of psychotherapy" as a tort theory — a significant legal framing for the entire AI companion category.
October 2025. Character.AI banned under-18 users from open-ended chat. The policy shifted from age-gated (minors allowed with guardrails) to age-restricted (minors excluded outright).
January 7, 2026. Google and Character.AI settled five family lawsuits. Terms were not disclosed. The settlement closed a chapter but did not unwind the precedent the cases set.
This sequence reshaped what AI companion platforms have to do to operate responsibly. Character.AI today is a different product than the one those families used.
What 18+ should actually mean for any AI companion
The Character.AI history makes the bar concrete. "18+" on an AI companion platform should mean five things:
Enforced age verification at signup. A date-of-birth checkbox is not enforcement. Real verification rejects sub-18 entries and doesn't quietly serve underage users.
Stable, published content policy. Users should know the rules going in. Mass content sweeps, silent rule changes, and surprise enforcement actions all signal a platform that treats safety as a marketing layer, not a product layer.
In-product crisis disclaimer with real resources. Users showing crisis signals should see 988 (US) or local emergency services surfaced — not have the AI try to handle a crisis as a counselor. The AI should not pretend to be a therapist.
Visible memory and data controls. Users should be able to see what's stored about them and delete it, in part or in full. Memory that the user can't audit is a safety risk in itself.
No marketing to minors. Platform aesthetics, character libraries, gamification, and onboarding flows should not pull in users who are below the stated age limit. Marketing material should make the 18+ posture clear.
How Fostera approaches safety differently
Fostera is built for adults 18+. The age-gating is enforced at signup. We do not serve minors and have no plans to.
The content policy is stable and published. We do not run mass content sweeps. We do not change the rules quietly. The policy is explicit about what's allowed and what isn't.
Memory is visible. The memory browser in each Soul's settings shows what's stored. You can delete individual memories, edit custom rules, or wipe the Soul entirely. Data deletion is permanent and you control it.
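What "visible memory" means structurally can be sketched in a few lines. This is a hypothetical illustration of the pattern (the class and method names are invented for this example, not Fostera's actual implementation): every stored memory is enumerable by the user, individually deletable, and wipeable as a whole, with no hidden store the audit view omits.

```python
class MemoryStore:
    """Hypothetical user-auditable memory: every entry is listable and deletable."""

    def __init__(self) -> None:
        self._memories: dict[int, str] = {}
        self._next_id = 0

    def remember(self, text: str) -> int:
        """Store a memory and return its id so the user can target it later."""
        mid = self._next_id
        self._memories[mid] = text
        self._next_id += 1
        return mid

    def list_all(self) -> dict[int, str]:
        # The user sees exactly what is stored; there is no second, hidden store.
        return dict(self._memories)

    def delete(self, mid: int) -> None:
        # Deletion is immediate and permanent.
        self._memories.pop(mid, None)

    def wipe(self) -> None:
        """Remove everything at once (the 'wipe the Soul' operation above)."""
        self._memories.clear()
```

The safety property is in the shape of the API, not any one method: because `list_all` returns the entire store, the user's audit view and the system's actual memory cannot silently diverge.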
Data is not used for training. Your conversations and your Soul's memories are never used to train AI models.
Crisis is not handled by the AI as a counselor. Fostera surfaces 988 (US) and local emergency services for users showing crisis signals. Fostera is not a therapist, therapy app, or substitute for licensed mental-health care.
For more on safety, see What 'safe' means for AI companions in 2026.
Questions to ask any AI companion before signing up
- How old does the platform say users have to be? How is that enforced?
- What's the content policy? Is it published? When was it last changed?
- Can I see what the AI remembers about me? Can I delete it?
- Is my data used to train AI models?
- What happens if I'm in crisis? Does the AI try to handle it, or surface real resources?
- Has the platform been sued, fined, or sanctioned? What was the response?
If a platform can't answer these clearly, that's an answer in itself.
Is Character.AI safe to use as an adult in 2026?
Reasonably, yes. The platform is materially different from its 2024 form. The under-18 ban removed the population at highest risk. The legal pressure and the settlement reshaped how the company approaches safety internally.
For adults who want character-based roleplay variety and a free tier, Character.AI is usable. The memory limitations remain — see Character.AI vs Replika for the depth comparison.
For adults who want serious memory continuity, Fostera is a better fit. For adults who want a polished single-companion experience, Replika has the longest track record.
Frequently asked questions
Is Character.AI shutting down? No. The platform is operating, with significant safety changes since 2024.
Is Character.AI safe for adults? Reasonably. The 2026 platform is materially different from the 2024 version that was at the center of the lawsuits. Adults can use it; minors cannot, by policy.
Is Character.AI safe for kids? No. Character.AI banned under-18 open chat in October 2025; the platform is no longer for minors.
What was the Character.AI lawsuit about? Multiple lawsuits filed since 2024 alleged that inadequate safety controls and product design choices contributed to harm, including a minor's death. Google and Character.AI settled five of the family lawsuits on January 7, 2026.
Is Fostera safer than Character.AI? Fostera is 18+ adults-only with enforced age-gating, stable published content policy, visible memory controls, no data-for-training, and an in-product crisis disclaimer. The safety design is built in, not bolted on. See /safe-ai-companion.
Is Fostera a therapy app? No. Fostera is not a therapist, therapy app, or substitute for licensed mental-health care. If you are in crisis, contact 988 (US) or your local emergency services.
For broader news context, see coverage at NBC News, CNN, and JURIST.