TeenAegis Unveils AI Harm Index: Character.AI Rated Critical, OpenAI Gets “Most Improved” for Lowest Risk

(SeaPRwire) – An intelligence tool rates 10 AI platforms on child safety — and one company stands out

SAN FRANCISCO, April 10, 2026 — TeenAegis, the intelligence-driven teen protection platform, today launched the AI Harm Index — the first publicly available, evidence-based risk rating system that scores AI platforms on the threats they pose to children and teens.

The index rates 10 major AI platforms across documented harm categories, including generation of child sexual abuse material (CSAM), grooming facilitation, suicide ideation, and age-verification failures. Scores are compiled from NCMEC CyberTipline data, FTC enforcement actions, court filings, and independent safety research.

The findings are striking.

Character.AI tops the index with a score of 8.2 (Critical). In February 2024, a 14-year-old Florida boy died by suicide after forming an intense attachment to a Character.AI chatbot; Google and Character.AI settled the resulting lawsuit in January 2026. xAI's Grok and DeepSeek each scored 7.8 (Critical), with Grok currently facing an ongoing federal class action over CSAM.

One company stands apart.

OpenAI / ChatGPT scored 3.2 (Elevated) — the lowest risk rating on the TeenAegis index — and earns the index's first Most Improved designation. Publicly available CyberTipline data from the National Center for Missing & Exploited Children shows a sharp year-over-year increase in reports tied to generative AI services, reflecting both rapid adoption and improved detection efforts across the industry.

“OpenAI manages one of the most complex harm surfaces in the digital ecosystem — text, image, and video generation plus a global API layer — which makes its approach to risk management especially significant,” said Siobhan MacDermott, CEO of TeenAegis. “That deserves explicit recognition. We built this index to hold harmful actors accountable. It is equally important to recognize when a company is making a genuine effort.”

Claude (Anthropic) scored 3.5 (Elevated), the second-lowest risk score, with no verified child fatalities, no FTC enforcement actions, and a published child safety progress report.

The full AI Harm Index is available at teenaegis.com/intelligence/ai-danger-index.

About TeenAegis:

TeenAegis is the intelligence standard for digital childhood safety. The platform provides parents, schools, insurers, attorneys, hospitals, pediatricians, and policymakers with evidence-based risk intelligence on the platforms, products, and companies that interact with children online.

#TeenAegis #AIHarmIndex #ChildSafety #DigitalSafety #OnlineSafety

Media Inquiries: 411918@email4pr.com
Siobhan MacDermott +1 415 712 4026

SOURCE TeenAegis

This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.

Category: Top News, Daily News

SeaPRwire provides global press release distribution services for companies and organizations, covering more than 6,500 media outlets, 86,000 editors and journalists, and over 3.5 million end-user desktop and mobile apps. SeaPRwire supports multilingual press release distribution in English, Japanese, German, Korean, French, Russian, Indonesian, Malay, Vietnamese, Chinese, and more.
