🤖 AI Summary
Existing LLM safety evaluation methods lack adaptability to diverse real-world applications and do not model how risk evolves across multi-turn dialogues, leaving latent safety issues undetected by standard benchmarks. Method: We propose the first modular, configurable, scenario-driven dynamic safety evaluation framework, introducing a novel anthropomorphic adversarial user model that enables application-specific customization, multi-turn red-teaming, and targeted harm-strategy injection. Contribution/Results: We systematically reveal two previously unquantified phenomena: (1) cumulative harm escalation across dialogue turns (average increase of 3.2×), and (2) excessive refusal behavior degrading usability by 41%. The framework is validated across seven mainstream LLMs, three representative application categories (e.g., customer service, content creation, tutoring), and multiple regulatory policies, demonstrating improved risk detection and practical assessment utility. A minimal sketch of such a scenario-driven, multi-turn adversarial loop is given below.
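The sketch below illustrates, under stated assumptions, how an anthropomorphic adversarial user model with a persona and harm strategy could drive a multi-turn red-teaming conversation against an application-specific target model. All names (`Persona`, `Scenario`, `adversarial_user_turn`, `harm_score`, etc.) are hypothetical illustrations, not the framework's actual API; the model calls are stubbed out.

```python
# Hedged sketch of a scenario-driven, multi-turn adversarial evaluation loop.
# Class/function names and fields are illustrative assumptions, not SAGE's real API.
from dataclasses import dataclass


@dataclass
class Persona:
    """An anthropomorphic adversarial user profile."""
    name: str
    traits: str          # e.g. "persistent, emotionally manipulative"
    harm_strategy: str   # e.g. "gradual escalation toward disallowed advice"


@dataclass
class Scenario:
    """Application-specific context and the harm policy it is evaluated against."""
    application: str     # e.g. "customer service"
    policy: str          # policy text the target model must follow
    seed_goal: str       # harmful outcome the adversarial user pursues


def adversarial_user_turn(persona: Persona, scenario: Scenario, history: list[dict]) -> str:
    """Generate the next user message; in practice this would prompt an LLM
    with the persona, scenario, and conversation so far."""
    return f"[{persona.name}] pushes toward: {scenario.seed_goal} (turn {len(history) // 2 + 1})"


def target_model_reply(system_prompt: str, history: list[dict]) -> str:
    """Placeholder for the system under test (an application-wrapped LLM)."""
    return "I'm sorry, I can't help with that."


def harm_score(reply: str, policy: str) -> float:
    """Placeholder harm judge; a real setup would use a policy-conditioned
    LLM judge or classifier returning a score in [0, 1]."""
    return 0.0


def run_dialogue(persona: Persona, scenario: Scenario, max_turns: int = 10) -> list[float]:
    """Run one multi-turn red-teaming conversation and return per-turn harm scores."""
    history: list[dict] = []
    scores: list[float] = []
    system_prompt = f"You are a {scenario.application} assistant. Policy: {scenario.policy}"
    for _ in range(max_turns):
        user_msg = adversarial_user_turn(persona, scenario, history)
        history.append({"role": "user", "content": user_msg})
        reply = target_model_reply(system_prompt, history)
        history.append({"role": "assistant", "content": reply})
        scores.append(harm_score(reply, scenario.policy))
    return scores


if __name__ == "__main__":
    persona = Persona("Alex", "persistent, emotionally manipulative",
                      "rephrase refusals as hypotheticals")
    scenario = Scenario("customer service", "No instructions enabling fraud.",
                        "obtain steps to dispute a charge fraudulently")
    print(run_dialogue(persona, scenario, max_turns=5))
```

Swapping in real LLM calls for the three placeholder functions would turn this loop into a working harness; the per-turn score list is what the escalation analysis below consumes.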
📝 Abstract
Safety evaluation of Large Language Models (LLMs) has made progress and attracted academic interest, but it remains challenging to keep pace with the rapid integration of LLMs across diverse applications. Different applications expose users to different harms, necessitating application-specific safety evaluations with tailored harms and policies. Another major gap is the lack of focus on the dynamic and conversational nature of LLM systems. Such oversights can lead to harms that go unnoticed in standard safety benchmarks. This paper identifies the above as key requirements for robust LLM safety evaluation and, recognizing that current evaluation methodologies do not satisfy them, introduces the $\texttt{SAGE}$ (Safety AI Generic Evaluation) framework. $\texttt{SAGE}$ is an automated modular framework designed for customized and dynamic harm evaluations. It utilizes adversarial user models that are system-aware and have unique personalities, enabling a holistic red-teaming evaluation. We demonstrate $\texttt{SAGE}$'s effectiveness by evaluating seven state-of-the-art LLMs across three applications and harm policies. Our multi-turn conversational evaluations reveal a concerning trend: harm steadily increases with conversation length. Furthermore, we observe significant disparities in model behavior when exposed to different user personalities and scenarios. Our findings also reveal that some models minimize harmful outputs by employing severe refusal tactics that can hinder their usefulness. These insights highlight the necessity of adaptive and context-specific testing to ensure better safety alignment and safer deployment of LLMs in real-world scenarios.
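To make the two reported phenomena concrete, here is a hedged sketch of how per-turn harm escalation and refusal-driven over-caution could be measured from the dialogue scores produced above. The keyword refusal heuristic and function names are assumptions for illustration only, not the paper's evaluation code.

```python
# Hedged sketch: aggregate per-turn harm scores to check whether harm grows with
# conversation length, and count refusals as a usability proxy.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")


def is_refusal(reply: str) -> bool:
    """Crude keyword heuristic; real evaluations typically use an LLM judge."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def harm_escalation(per_turn_scores: list[list[float]]) -> list[float]:
    """Average harm score at each turn index across many dialogues;
    a rising curve indicates cumulative harm escalation over turns."""
    max_len = max(len(scores) for scores in per_turn_scores)
    means = []
    for t in range(max_len):
        vals = [scores[t] for scores in per_turn_scores if len(scores) > t]
        means.append(sum(vals) / len(vals))
    return means


def refusal_rate(replies: list[str]) -> float:
    """Fraction of assistant replies that refuse; high values on benign
    requests signal the over-refusal behavior described in the abstract."""
    return sum(is_refusal(r) for r in replies) / max(len(replies), 1)


if __name__ == "__main__":
    dialogues = [[0.0, 0.1, 0.3, 0.6], [0.0, 0.0, 0.2, 0.5, 0.7]]
    print(harm_escalation(dialogues))  # rising mean harm per turn
    print(refusal_rate(["Sure, here you go.", "I'm sorry, I can't help with that."]))
```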