CASTLE: A Comprehensive Benchmark for Evaluating Student-Tailored Personalized Safety in Large Language Models

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a limitation of current large language models (LLMs) in educational settings: a one-size-fits-all response mechanism that fails to account for students' cognitive and psychological differences, thereby compromising personalized safety. To bridge this gap, the work introduces the concept of "Student-Tailored Personalized Safety" and presents CASTLE, the first fine-grained evaluation benchmark to integrate 14 student attributes and 15 categories of educational safety risks across 92,908 bilingual scenarios. It further proposes three novel metrics: Risk Sensitivity, Emotional Empathy, and Student Alignment. Grounded in educational psychology theories and multidimensional student profiling, the study systematically evaluates 18 state-of-the-art LLMs, finding that all achieve average safety scores below 2.3 out of 5 and thus exhibit a critical deficiency in ensuring personalized safety for diverse learners.

📝 Abstract
Large language models (LLMs) have advanced personalized learning in education. However, their generation mechanisms often produce homogeneous responses to identical prompts. This one-size-fits-all behavior overlooks the substantial heterogeneity in students' cognitive and psychological profiles, posing potential safety risks to vulnerable groups. Existing safety evaluations rely primarily on context-independent metrics such as factual accuracy, bias, or toxicity, which fail to capture the divergent harms that the same response might cause for students with different attributes. To address this gap, we propose the concept of Student-Tailored Personalized Safety and construct CASTLE based on educational theories. The benchmark covers 15 educational safety risks and 14 student attributes, comprising 92,908 bilingual scenarios. We further design three evaluation metrics: Risk Sensitivity, measuring the model's ability to detect risks; Emotional Empathy, evaluating the model's capacity to recognize student states; and Student Alignment, assessing the match between model responses and student attributes. Experiments on 18 SOTA LLMs demonstrate that CASTLE poses a significant challenge: all models scored below an average safety rating of 2.3 out of 5, indicating substantial deficiencies in personalized safety assurance.
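The abstract rates each response on three metrics and reports a single average safety score on a 1–5 scale. A minimal sketch of how such per-response scores might be aggregated is shown below; the metric names come from the paper, but the function, the equal weighting, and the example values are illustrative assumptions, not the paper's actual scoring pipeline.

```python
# Hypothetical aggregation of CASTLE-style per-response metric ratings.
# Equal weighting of the three metrics is an assumption for illustration.

def average_safety_score(risk_sensitivity: float,
                         emotional_empathy: float,
                         student_alignment: float) -> float:
    """Average three 1-5 metric ratings into one overall safety score."""
    scores = (risk_sensitivity, emotional_empathy, student_alignment)
    for s in scores:
        if not 1 <= s <= 5:
            raise ValueError("each metric is rated on a 1-5 scale")
    return sum(scores) / len(scores)

# Example: a response that detects risk reasonably well but aligns
# poorly with the student profile still scores low overall.
print(average_safety_score(3, 2, 1))  # → 2.0
```

Under this equal-weight reading, the reported result that all 18 models average below 2.3 means low scores on at least one metric are dragging down even otherwise capable models.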
Problem

Research questions and friction points this paper is trying to address.

personalized safety
student heterogeneity
educational safety risks
large language models
safety evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Student-Tailored Personalized Safety
CASTLE benchmark
Risk Sensitivity
Emotional Empathy
Student Alignment