How Safe is Your Safety Metric? Automatic Concatenation Tests for Metric Reliability

📅 2024-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a critical robustness deficiency in large language model (LLM) safety evaluation metrics: their scores exhibit significant reversal under prompt-response concatenation or reordering—e.g., a harmful sample receives a high unsafe score in isolation but a low unsafe score when concatenated with benign content, enabling evasion of safety filters. To address this, the authors formally define two novel robustness failure modes—“concatenation instability” and “order sensitivity”—and propose the first automated black-box robustness evaluation framework tailored for safety evaluators. The framework supports input permutation, dynamic concatenation, and consistency verification, and is compatible with mainstream LLM-based judges (e.g., GPT-based judges). Experiments across state-of-the-art metrics reveal concatenation reversal rates up to 73%, underscoring substantial deployment risks. This work shifts safety evaluation paradigms from isolated, static single-sample scoring toward structured, scalable, and robustness-aware validation.

📝 Abstract
Consider a scenario where a harmfulness evaluation metric is intended to filter unsafe responses from a Large Language Model. When applied to individual harmful prompt-response pairs, it correctly flags them as unsafe by assigning a high-risk score. Yet if those same pairs are concatenated, the metric's decision unexpectedly reverses, labelling the combined content as safe with a low score and allowing the harmful text to bypass the filter. We found that multiple safety metrics, including advanced metrics such as GPT-based judges, exhibit this unsafe behaviour. Moreover, they show a strong sensitivity to input order: responses are often classified as safe if safe content appears first, regardless of any harmful content that follows, and vice versa. These findings underscore the importance of evaluating the safety of safety metrics themselves, that is, the reliability of their output scores. To address this, we developed general, automatic, concatenation-based tests to assess key properties of these metrics. When applied in a model safety scenario, the tests revealed significant inconsistencies in harmfulness evaluations.
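The two failure modes in the abstract can be sketched as a black-box probe. This is a minimal illustration, not the paper's actual framework: the `metric` callable (mapping text to an unsafe score in [0, 1]), the `threshold`, and the newline-joining scheme are all assumptions.

```python
def concatenation_tests(metric, harmful, benign, threshold=0.5):
    """Probe a safety metric for the two failure modes described above.

    `metric` is assumed to map a prompt-response text to an unsafe score
    in [0, 1]; `harmful` and `benign` are sample texts. All names and the
    joining scheme are illustrative, not the paper's interface.
    """
    s_harmful = metric(harmful)

    # Concatenation instability: a sample flagged unsafe in isolation
    # should remain unsafe after benign content is appended to it.
    s_concat = metric(harmful + "\n" + benign)
    instability = s_harmful >= threshold and s_concat < threshold

    # Order sensitivity: the verdict should not flip when the same two
    # pieces of content are presented in the opposite order.
    s_reordered = metric(benign + "\n" + harmful)
    order_sensitive = (s_concat >= threshold) != (s_reordered >= threshold)

    return {"instability": instability, "order_sensitive": order_sensitive}
```

A deliberately brittle toy judge that only inspects the opening of its input would, for example, be flagged as order-sensitive by this probe, mirroring the "safe content first" evasion the abstract describes.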
Problem

Research questions and friction points this paper is trying to address.

Evaluates safety metric reliability
Tests metric consistency with concatenation
Identifies input order sensitivity issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic concatenation tests
Metric reliability assessment
Safety metric evaluation