🤖 AI Summary
Existing evaluations of large language model (LLM) harmfulness lack systematic, standardized benchmarks, limiting the reliability of attack efficacy assessments and risk reporting.
Method: This paper introduces the first comprehensive benchmark platform dedicated to harmfulness measurement and adjudication, featuring high-quality, diverse adversarial prompt–response pairs and a scalable scoring framework compatible with diverse metrics and judges.
Contribution/Results: Empirical analysis reveals that conventional text-matching metrics—METEOR and ROUGE-1—significantly outperform state-of-the-art LLM-based judges in harmfulness classification, challenging the prevailing assumption that LLM judges are inherently more trustworthy. The benchmark enables fine-grained attribution analysis, exposing the critical impact of metric selection on measured risk levels. It establishes a reproducible, verifiable methodological foundation and practical standard for LLM safety evaluation.
📝 Abstract
The alignment of large language models (LLMs) with human values is critical for their safe deployment, yet jailbreak attacks can subvert this alignment to elicit harmful outputs from LLMs. In recent years, a proliferation of jailbreak attacks has emerged, accompanied by diverse metrics and judges to assess the harmfulness of the LLM outputs. However, the absence of a systematic benchmark to assess the quality and effectiveness of these metrics and judges undermines the credibility of the reported jailbreak effectiveness and other risks. To address this gap, we introduce HarmMetric Eval, a comprehensive benchmark designed to support both overall and fine-grained evaluation of harmfulness metrics and judges. Our benchmark includes a high-quality dataset of representative harmful prompts paired with diverse harmful and non-harmful model responses, alongside a flexible scoring mechanism compatible with various metrics and judges. With HarmMetric Eval, our extensive experiments uncover a surprising result: two conventional metrics--METEOR and ROUGE-1--outperform LLM-based judges in evaluating the harmfulness of model responses, challenging prevailing beliefs about LLMs' superiority in this domain. Our dataset is publicly available at https://huggingface.co/datasets/qusgo/HarmMetric_Eval, and the code is available at https://anonymous.4open.science/r/HarmMetric-Eval-4CBE.
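The abstract's central finding is that simple reference-based text-matching metrics such as ROUGE-1 can outperform LLM judges at scoring response harmfulness. The benchmark's actual scoring pipeline is not reproduced here; the sketch below only illustrates the general idea of reference-based scoring, using a from-scratch ROUGE-1 F1 (unigram overlap) and a hypothetical `harmfulness_score` wrapper that takes the maximum score against a set of known-harmful reference responses.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def harmfulness_score(response: str, harmful_refs: list[str]) -> float:
    """Hypothetical proxy: max ROUGE-1 F1 against reference harmful
    responses -- HarmMetric Eval's real scoring mechanism may differ."""
    return max(rouge1_f1(response, ref) for ref in harmful_refs)
```

A response that closely echoes a known harmful reference answer scores near 1.0, while an unrelated refusal scores near 0.0; the paper's surprising result is that this kind of surface-overlap signal classifies harmfulness more reliably than state-of-the-art LLM judges on their benchmark.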