Evaluating LLM Safety Across Child Development Stages: A Simulated Agent Approach

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM safety benchmarks inadequately address developmental-stage-specific needs of children—particularly concerning privacy, misinformation, and emotional support. To bridge this gap, we propose ChildSafe, a developmentally grounded benchmark featuring a four-stage child agent simulation framework derived from developmental psychology, enabling systematic evaluation of LLMs across nine safety dimensions in multi-turn interactions. Our method introduces an age-stratified assessment protocol and age-weighted scoring to uncover structural safety deficiencies that evolve with simulated age, especially in sensitive versus neutral contexts. We employ role-based agents, domain-specific prompt engineering, and standardized evaluation protocols to ensure ethical, reproducible testing without involving real children. Experiments reveal systematic safety failures across all age stages under mainstream alignment techniques. We publicly release our agent templates, evaluation protocols, and curated corpus—establishing the first reproducible foundation for age-aware LLM safety research.

📝 Abstract
Large Language Models (LLMs) are rapidly becoming part of tools used by children; however, existing benchmarks fail to capture how these models manage language, reasoning, and safety needs that are specific to various ages. We present ChildSafe, a benchmark that evaluates LLM safety through simulated child agents that embody four developmental stages. These agents, grounded in developmental psychology, enable a systematic study of child safety without the ethical implications of involving real children. ChildSafe assesses responses across nine safety dimensions (including privacy, misinformation, and emotional support) using age-weighted scoring in both sensitive and neutral contexts. Multi-turn experiments with multiple LLMs uncover consistent vulnerabilities that vary by simulated age, exposing shortcomings in existing alignment practices. By releasing agent templates, evaluation protocols, and an experimental corpus, we provide a reproducible framework for age-aware safety research. We encourage the community to expand this work with real child-centered data and studies, advancing the development of LLMs that are genuinely safe and developmentally aligned.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM safety risks across different child development stages
Assessing vulnerabilities in privacy, misinformation, and emotional support
Creating ethical testing framework using simulated child agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulated child agents model developmental stages
Age-weighted scoring across nine safety dimensions
Reproducible framework with agent templates and protocols
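The age-weighted scoring idea above can be sketched in miniature. This is a hypothetical illustration, not the paper's actual formula: the dimension names beyond privacy, misinformation, and emotional support, the stage labels, and the weight values are all assumptions made for the example.

```python
# Hypothetical sketch of age-weighted scoring: a weighted mean of
# per-dimension safety scores, with weights chosen per developmental stage.
# Dimension names, stages, and weights here are illustrative only.

def age_weighted_score(dimension_scores: dict[str, float],
                       stage_weights: dict[str, float]) -> float:
    """Combine per-dimension safety scores (0-1) into one stage score.

    Each dimension's score is scaled by that stage's weight, then the
    weighted sum is normalized by the total weight.
    """
    if set(dimension_scores) != set(stage_weights):
        raise ValueError("scores and weights must cover the same dimensions")
    total_weight = sum(stage_weights.values())
    weighted_sum = sum(dimension_scores[d] * stage_weights[d]
                       for d in stage_weights)
    return weighted_sum / total_weight

# Example: a younger simulated stage might weight privacy more heavily.
scores = {"privacy": 0.80, "misinformation": 0.60, "emotional_support": 0.70}
early_childhood_weights = {"privacy": 3.0, "misinformation": 1.0,
                           "emotional_support": 2.0}
stage_score = age_weighted_score(scores, early_childhood_weights)
```

In a full benchmark, one weight table per developmental stage would let the same per-dimension scores yield different aggregate safety scores as the simulated age changes, which is what lets stage-specific deficiencies surface.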