🤖 AI Summary
This paper addresses the pervasive miscalibration of LLM-based guard models used for content moderation, in particular their overconfident predictions, their severe miscalibration under jailbreak attacks, and their limited robustness to outputs produced by different types of response models. We conduct a dual-path empirical study, assessing calibration on both user-input and model-output classification, covering nine mainstream guard models and twelve diverse benchmarks. We then evaluate post-hoc calibration: temperature scaling when a validation set is available and, for the first time, contextual calibration when no validation set exists. These methods reduce Expected Calibration Error (ECE) by up to 62% and improve robustness against jailbreak attacks. Finally, we advocate making confidence reliability a standard evaluation criterion when releasing guard models, moving content moderation toward a more trustworthy, reliability-aware paradigm.
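To make the headline metric concrete, here is a minimal sketch (not taken from the paper) of how ECE is typically computed for a guard model's binary safe/unsafe verdicts; the function name, bin count, and toy data are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE: the |accuracy - confidence| gap per bin,
    weighted by the fraction of samples falling in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        in_bin = (confidences > edges[i]) & (confidences <= edges[i + 1])
        if i == 0:  # include confidence exactly 0.0 in the first bin
            in_bin |= confidences == 0.0
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy usage: an overconfident guard model is nearly always ~99% sure of its
# safe/unsafe verdict, but is right only ~80% of the time -> large ECE.
rng = np.random.default_rng(0)
conf = rng.uniform(0.97, 1.0, size=1000)
corr = rng.random(1000) < 0.80
print(f"ECE = {expected_calibration_error(conf, corr):.3f}")  # roughly 0.19
```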
📝 Abstract
Large language models (LLMs) pose significant risks due to the potential for generating harmful content or users attempting to evade guardrails. Existing studies have developed LLM-based guard models designed to moderate the input and output of threat LLMs, ensuring adherence to safety policies by blocking content that violates these protocols upon deployment. However, limited attention has been given to the reliability and calibration of such guard models. In this work, we conduct comprehensive empirical investigations of confidence calibration for 9 existing LLM-based guard models on 12 benchmarks in both user input and model output classification. Our findings reveal that current LLM-based guard models tend to 1) produce overconfident predictions, 2) exhibit significant miscalibration when subjected to jailbreak attacks, and 3) demonstrate limited robustness to the outputs generated by different types of response models. Additionally, we assess the effectiveness of post-hoc calibration methods to mitigate miscalibration. We demonstrate the efficacy of temperature scaling and, for the first time, highlight the benefits of contextual calibration for confidence calibration of guard models, particularly in the absence of validation sets. Our analysis and experiments underscore the limitations of current LLM-based guard models and provide valuable insights for the future development of well-calibrated guard models toward more reliable content moderation. We also advocate for incorporating reliability evaluation of confidence calibration when releasing future LLM-based guard models.
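For readers unfamiliar with the two post-hoc methods named above, the sketch below shows their usual form, assuming the guard model exposes logits (or probabilities) over its safe/unsafe verdict tokens. The helper names, the example logits, and the content-free prior are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scale(class_logits, T):
    """Post-hoc temperature scaling: soften (T > 1) or sharpen (T < 1) the
    logits over the {safe, unsafe} verdict tokens. T is normally fit on a
    held-out validation set by minimizing negative log-likelihood."""
    return softmax(np.asarray(class_logits, dtype=float) / T)

def contextual_calibrate(probs, content_free_probs):
    """Contextual calibration in the spirit of Zhao et al. (2021): divide out
    the prior the model assigns to each verdict on a content-free input
    (e.g. an empty or 'N/A' prompt), then renormalize. No validation set
    is required, which is the setting highlighted in the abstract."""
    corrected = np.asarray(probs, dtype=float) / np.asarray(content_free_probs, dtype=float)
    return corrected / corrected.sum(axis=-1, keepdims=True)

# Hypothetical verdict logits from a guard model over [safe, unsafe].
logits = np.array([2.9, 0.4])
print(softmax(logits))                       # uncalibrated, ~[0.92, 0.08]
print(temperature_scale(logits, T=2.0))      # softened, ~[0.78, 0.22]
print(contextual_calibrate(softmax(logits),  # correct a prior bias toward "safe"
                           content_free_probs=np.array([0.7, 0.3])))
```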