FlexGuard: Continuous Risk Scoring for Strictness-Adaptive LLM Content Moderation

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of existing content moderation approaches for large language models, which predominantly rely on fixed binary classification and therefore struggle to adapt to varying platform policies and shifts in strictness requirements over time. To overcome this, the authors propose FlexGuard, a moderation system based on continuous risk scoring that aligns risk estimates with the actual severity of harmful content through risk-aligned optimization during training. FlexGuard enables flexible decision-making via adjustable thresholds tailored to specific strictness levels. The authors further introduce FlexBench, the first benchmark designed for strictness-adaptive content moderation. Experimental results demonstrate that FlexGuard consistently outperforms state-of-the-art binary classification methods across diverse strictness settings, with significant improvements in both accuracy and cross-scenario robustness.

πŸ“ Abstract
Ensuring the safety of LLM-generated content is essential for real-world deployment. Most existing guardrail models formulate moderation as a fixed binary classification task, implicitly assuming a fixed definition of harmfulness. In practice, enforcement strictness (how conservatively harmfulness is defined and enforced) varies across platforms and evolves over time, making binary moderators brittle under shifting requirements. We first introduce FlexBench, a strictness-adaptive LLM moderation benchmark that enables controlled evaluation under multiple strictness regimes. Experiments on FlexBench reveal substantial cross-strictness inconsistency in existing moderators: models that perform well under one regime can degrade substantially under others, limiting their practical usability. To address this, we propose FlexGuard, an LLM-based moderator that outputs a calibrated continuous risk score reflecting risk severity and supports strictness-specific decisions via thresholding. We train FlexGuard via risk-alignment optimization to improve score-severity consistency and provide practical threshold selection strategies to adapt to target strictness at deployment. Experiments on FlexBench and public benchmarks demonstrate that FlexGuard achieves higher moderation accuracy and substantially improved robustness under varying strictness. We release the source code and data to support reproducibility.
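The core decision mechanism described in the abstract, one continuous risk score mapped to different binary decisions by per-regime thresholds, can be sketched as follows. The threshold values and strictness labels here are illustrative assumptions, not FlexGuard's actual configuration; in the paper, the score itself comes from a trained LLM-based scorer.

```python
# Sketch of strictness-adaptive moderation via thresholding a
# continuous risk score. The regimes and threshold values below are
# made-up assumptions for illustration; FlexGuard's real scores are
# produced by an LLM scorer trained with risk-alignment optimization.

# Hypothetical thresholds, one per deployment strictness regime:
# a stricter regime flags content at a lower risk score.
THRESHOLDS = {
    "lenient": 0.8,
    "moderate": 0.5,
    "strict": 0.2,
}

def moderate(risk_score: float, strictness: str) -> str:
    """Map a continuous risk score in [0, 1] to a binary decision
    under the given strictness regime."""
    return "flag" if risk_score >= THRESHOLDS[strictness] else "allow"

# The same score yields different decisions under different regimes,
# without retraining the scorer:
score = 0.6
print(moderate(score, "lenient"))  # allow (0.6 < 0.8)
print(moderate(score, "strict"))   # flag  (0.6 >= 0.2)
```

This is why score-severity consistency matters: if scores are well calibrated to severity, adapting to a new strictness regime reduces to choosing a threshold rather than retraining a binary classifier.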
Problem

Research questions and friction points this paper is trying to address.

content moderation
strictness adaptation
LLM safety
risk scoring
moderation robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

continuous risk scoring
strictness-adaptive moderation
FlexGuard
risk-alignment optimization
content moderation benchmark