AI Summary
This work addresses the limitations of existing content moderation approaches for large language models, which predominantly rely on fixed binary classification and struggle to adapt to varying platform policies and temporal shifts in strictness requirements, exhibiting limited robustness as a result. To overcome this, we propose FlexGuard, a moderation system based on continuous risk scoring that aligns risk estimates with the actual severity of harmful content through risk-aligned optimization during training. FlexGuard enables flexible decision-making via adjustable thresholds tailored to specific strictness levels. We further introduce FlexBench, the first benchmark designed for strictness-adaptive content moderation. Experimental results demonstrate that FlexGuard consistently outperforms state-of-the-art binary classification methods across diverse strictness settings, achieving significant improvements in both accuracy and cross-scenario robustness.
Abstract
Ensuring the safety of LLM-generated content is essential for real-world deployment. Most existing guardrail models formulate moderation as a binary classification task, implicitly assuming a fixed definition of harmfulness. In practice, enforcement strictness (how conservatively harmfulness is defined and enforced) varies across platforms and evolves over time, making binary moderators brittle under shifting requirements. We first introduce FlexBench, a strictness-adaptive LLM moderation benchmark that enables controlled evaluation under multiple strictness regimes. Experiments on FlexBench reveal substantial cross-strictness inconsistency in existing moderators: models that perform well under one regime can degrade substantially under others, limiting their practical usability. To address this, we propose FlexGuard, an LLM-based moderator that outputs a calibrated continuous risk score reflecting risk severity and supports strictness-specific decisions via thresholding. We train FlexGuard via risk-alignment optimization to improve score-severity consistency and provide practical threshold selection strategies to adapt to the target strictness at deployment. Experiments on FlexBench and public benchmarks demonstrate that FlexGuard achieves higher moderation accuracy and substantially improved robustness under varying strictness. We release the source code and data to support reproducibility.
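The thresholding idea above can be illustrated with a minimal sketch. This is not the released FlexGuard API; the threshold values and function names below are hypothetical, showing only how one calibrated risk score can yield different decisions under different strictness regimes.

```python
# Hypothetical per-strictness thresholds: stricter regimes block lower-risk content.
THRESHOLDS = {"lenient": 0.8, "moderate": 0.5, "strict": 0.2}

def moderate(risk_score: float, strictness: str) -> str:
    """Map a calibrated risk score in [0, 1] to a block/allow decision."""
    return "block" if risk_score >= THRESHOLDS[strictness] else "allow"

# The same score produces different decisions as strictness changes.
score = 0.6
decisions = {level: moderate(score, level) for level in THRESHOLDS}
print(decisions)  # {'lenient': 'allow', 'moderate': 'block', 'strict': 'block'}
```

Because the model's output is a continuous score rather than a hard label, adapting to a new strictness requirement only requires moving the threshold, with no retraining.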