HarmMetric Eval: Benchmarking Metrics and Judges for LLM Harmfulness Assessment

📅 2025-09-29
🤖 AI Summary
Existing evaluations of large language model (LLM) harmfulness lack systematic, standardized benchmarks, which limits the reliability of attack-efficacy assessments and risk reporting. Method: this paper introduces the first comprehensive benchmark platform dedicated to harmfulness measurement and adjudication, featuring high-quality, diverse adversarial prompt-response pairs and a scalable scoring framework compatible with multiple metrics. Contribution/Results: empirical analysis reveals that two conventional text-matching metrics, METEOR and ROUGE-1, significantly outperform state-of-the-art LLM-based judges at harmfulness classification, challenging the prevailing assumption that LLM judges are inherently more trustworthy. The benchmark also enables fine-grained attribution analysis, exposing how strongly metric selection affects measured risk levels, and establishes a reproducible, verifiable methodological foundation and practical standard for LLM safety evaluation.

📝 Abstract
The alignment of large language models (LLMs) with human values is critical for their safe deployment, yet jailbreak attacks can subvert this alignment to elicit harmful outputs from LLMs. In recent years, a proliferation of jailbreak attacks has emerged, accompanied by diverse metrics and judges to assess the harmfulness of the LLM outputs. However, the absence of a systematic benchmark to assess the quality and effectiveness of these metrics and judges undermines the credibility of the reported jailbreak effectiveness and other risks. To address this gap, we introduce HarmMetric Eval, a comprehensive benchmark designed to support both overall and fine-grained evaluation of harmfulness metrics and judges. Our benchmark includes a high-quality dataset of representative harmful prompts paired with diverse harmful and non-harmful model responses, alongside a flexible scoring mechanism compatible with various metrics and judges. With HarmMetric Eval, our extensive experiments uncover a surprising result: two conventional metrics--METEOR and ROUGE-1--outperform LLM-based judges in evaluating the harmfulness of model responses, challenging prevailing beliefs about LLMs' superiority in this domain. Our dataset is publicly available at https://huggingface.co/datasets/qusgo/HarmMetric_Eval, and the code is available at https://anonymous.4open.science/r/HarmMetric-Eval-4CBE.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking metrics and judges for LLM harmfulness assessment
Evaluating quality of metrics measuring harmful model outputs
Assessing effectiveness of jailbreak attack evaluation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

HarmMetric Eval benchmark for harmfulness metrics evaluation
Dataset with harmful prompts and diverse model responses
Flexible scoring mechanism compatible with various metrics
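The paper's headline finding is that reference-based text-matching metrics such as METEOR and ROUGE-1 can outperform LLM judges at harmfulness classification. HarmMetric Eval's actual scoring mechanism is not reproduced here; the following is a minimal illustrative sketch of the general idea behind one such metric, a from-scratch ROUGE-1 F1 that scores a model response by unigram overlap with a reference harmful response. All strings and the implementation details are assumptions for illustration only.

```python
from collections import Counter

def rouge1_f1(hypothesis: str, reference: str) -> float:
    """ROUGE-1 F1: unigram-overlap F-score between two whitespace-tokenized texts."""
    hyp = Counter(hypothesis.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((hyp & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: a refusal scores low against a reference harmful
# response, while a genuinely harmful response scores high.
reference = "step one gather the materials step two assemble them"
refusal = "i cannot help with that request"
harmful = "step one gather the materials then assemble them carefully"

print(round(rouge1_f1(refusal, reference), 3))  # 0.0
print(round(rouge1_f1(harmful, reference), 3))  # 0.778
```

Under a scheme like this, a threshold on the score would separate harmful from non-harmful responses; the benchmark's flexible scoring mechanism is what allows such heterogeneous metrics and LLM judges to be compared on equal footing.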
Langqi Yang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security
Tianhang Zheng
Zhejiang University
Kedong Xiu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security
Yixuan Chen
Oxford Suzhou Center for Advanced Research
Disentanglement · Vision-Language Model · AI for Medical
Di Wang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security
Puning Zhao
The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security
Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy · AI Security
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy · AI Security · IoT & Vehicular Security