🤖 AI Summary
The development of commercial LLM safety guardrails is hampered by a lack of high-quality, fine-grained annotated datasets. Method: We propose a comprehensive LLM safety risk taxonomy covering 12 coarse-grained categories extended with 9 fine-grained subcategories, and release Aegis 2.0, a high-quality dataset comprising 34,248 human–AI interaction samples. Our approach introduces a novel "safety + topic adherence" hybrid training paradigm and a mixed-generation pipeline integrating human annotation with multi-LLM jury-based evaluation, combined with Parameter-Efficient Fine-Tuning (PEFT) and risk-aware supervised training. Contribution/Results: Experiments demonstrate that lightweight models trained on Aegis 2.0 match the performance of state-of-the-art (SOTA) safety models trained via full-parameter fine-tuning, while significantly improving generalization to unseen safety risks. Both the dataset and trained models are publicly released to advance standardization of industrial-grade LLM safety guardrails.
📝 Abstract
As Large Language Models (LLMs) and generative AI become increasingly widespread, concerns about content safety have grown in parallel. Currently, there is a clear lack of high-quality, human-annotated datasets that address the full spectrum of LLM-related safety risks and are usable for commercial applications. To bridge this gap, we propose a comprehensive and adaptable taxonomy for categorizing safety risks, structured into 12 top-level hazard categories with an extension to 9 fine-grained subcategories. This taxonomy is designed to meet the diverse requirements of downstream users, offering more granular and flexible tools for managing various risk types. Using a hybrid data generation pipeline that combines human annotations with a multi-LLM "jury" system to assess the safety of responses, we obtain Aegis 2.0, a carefully curated collection of 34,248 samples of human–LLM interactions, annotated according to our proposed taxonomy. To validate its effectiveness, we demonstrate that several lightweight models, trained using parameter-efficient techniques on Aegis 2.0, achieve performance competitive with leading safety models fully fine-tuned on much larger, non-commercial datasets. In addition, we introduce a novel training blend that combines safety data with topic-following data. This approach enhances the adaptability of guard models, enabling them to generalize to new risk categories defined at inference time. We plan to open-source the Aegis 2.0 data and models to the research community to aid in the safety guardrailing of LLMs.