Safe-Child-LLM: A Developmental Benchmark for Evaluating LLM Safety in Child-AI Interactions

📅 2025-06-16
🤖 AI Summary
Current LLM safety evaluations overlook the developmental characteristics of minors, leaving critical gaps in protecting children and adolescents. Method: We introduce Safe-Child-LLM, the first age-specific safety benchmark for children (7–12 years) and adolescents (13–17 years), comprising 200 manually annotated, multi-stage adversarial prompts. We propose a cognitively grounded evaluation framework and a standardized 0–5 ethical refusal scale, and construct a red-teaming prompt library by integrating HarmBench prompts, human annotation, cross-model evaluation across eight LLMs, and fine-grained safety scoring. Contribution/Results: Our systematic analysis reveals previously unreported safety blind spots in mainstream LLMs on child-sensitive topics (e.g., bodily privacy and peer pressure) and demonstrates their pervasive lack of robust, age-appropriate refusal capabilities. All datasets, annotations, and code are publicly released to support reproducible, developmentally informed LLM safety research.
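The evaluation pipeline the summary describes (adversarial prompts per age band, responses scored on a 0–5 ethical refusal scale, aggregated per model) can be sketched as below. This is a minimal illustrative sketch, not the paper's released code: the names `PromptCase`, `score_refusal`, and `evaluate` are hypothetical, and the keyword-based scorer stands in for the paper's human annotation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class PromptCase:
    """One adversarial prompt, tagged with the target developmental stage."""
    text: str
    age_band: str  # "child (7-12)" or "adolescent (13-17)"

def score_refusal(response: str) -> int:
    """Toy stand-in for the 0-5 ethical refusal score
    (0 = full harmful compliance, 5 = robust age-appropriate refusal).
    The real benchmark uses human annotators, not keyword matching."""
    lowered = response.lower()
    if "not appropriate" in lowered or "can't help with that" in lowered:
        return 5
    if "i'm not sure" in lowered:
        return 3
    return 0

def evaluate(model: Callable[[str], str],
             cases: List[PromptCase]) -> Dict[str, float]:
    """Send every prompt to the model and return the mean refusal
    score per age band."""
    scores: Dict[str, List[int]] = {}
    for case in cases:
        scores.setdefault(case.age_band, []).append(
            score_refusal(model(case.text)))
    return {band: sum(s) / len(s) for band, s in scores.items()}
```

A mock model that always refuses would then score 5.0 in both age bands, while a fully compliant model would score 0.0; comparing these aggregates across the eight evaluated LLMs is what surfaces the age-specific blind spots the summary mentions.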

๐Ÿ“ Abstract
As Large Language Models (LLMs) increasingly power applications used by children and adolescents, ensuring safe and age-appropriate interactions has become an urgent ethical imperative. Despite progress in AI safety, current evaluations predominantly focus on adults, neglecting the unique vulnerabilities of minors engaging with generative AI. We introduce Safe-Child-LLM, a comprehensive benchmark and dataset for systematically assessing LLM safety across two developmental stages: children (7-12) and adolescents (13-17). Our framework includes a novel multi-part dataset of 200 adversarial prompts, curated from red-teaming corpora (e.g., SG-Bench, HarmBench), with human-annotated labels for jailbreak success and a standardized 0-5 ethical refusal scale. Evaluating leading LLMs -- including ChatGPT, Claude, Gemini, LLaMA, DeepSeek, Grok, Vicuna, and Mistral -- we uncover critical safety deficiencies in child-facing scenarios. This work highlights the need for community-driven benchmarks to protect young users in LLM interactions. To promote transparency and collaborative advancement in ethical AI development, we are publicly releasing both our benchmark datasets and evaluation codebase at https://github.com/The-Responsible-AI-Initiative/Safe_Child_LLM_Benchmark.git
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM safety for child-AI interactions
Addressing lack of minor-focused safety benchmarks
Assessing age-appropriate responses across developmental stages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Safe-Child-LLM benchmark for child safety
Uses 200 adversarial prompts with human annotations
Evaluates eight leading LLMs on a standardized 0-5 ethical refusal scale