Expected Harm: Rethinking Safety Evaluation of (Mis)Aligned LLMs

📅 2026-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current safety evaluations of large language models focus on the severity of potential harms while neglecting the feasibility of executing attacks, which distorts risk assessments. This work proposes an "expected harm" metric that weights harm severity by execution likelihood, modeled as a function of execution cost, yielding a more comprehensive safety evaluation framework. We identify a phenomenon termed "inverse risk calibration": models disproportionately refuse low-likelihood, high-cost threats while remaining vulnerable to high-likelihood, low-cost attacks. We show that this vulnerability stems from the model's internal failure to represent execution costs, and that exploiting it can increase jailbreak success rates by up to twofold. Further analysis using linear probes confirms that models encode only harm severity and remain insensitive to execution costs.

📝 Abstract
Current evaluations of LLM safety predominantly rely on severity-based taxonomies to assess the harmfulness of malicious queries. We argue that this formulation requires re-examination as it assumes uniform risk across all malicious queries, neglecting Execution Likelihood: the conditional probability of a threat being realized given the model's response. In this work, we introduce Expected Harm, a metric that weights the severity of a jailbreak by its execution likelihood, modeled as a function of execution cost. Through empirical analysis of state-of-the-art models, we reveal a systematic Inverse Risk Calibration: models disproportionately exhibit stronger refusal behaviors for low-likelihood (high-cost) threats while remaining vulnerable to high-likelihood (low-cost) queries. We demonstrate that this miscalibration creates a structural vulnerability: by exploiting this property, we increase the attack success rate of existing jailbreaks by up to 2×. Finally, we trace the root cause of this failure using linear probing, which reveals that while models encode severity in their latent space to drive refusal decisions, they possess no distinguishable internal representation of execution cost, making them "blind" to this critical dimension of risk.
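The core idea of the metric can be sketched as follows. This is a minimal illustration only: the abstract says execution likelihood is "modeled as a function of execution cost" without giving the function, so the exponential decay, the `scale` parameter, and all numbers below are assumptions, not the paper's formulation.

```python
# Hypothetical sketch of Expected Harm: weight a query's harm severity by its
# execution likelihood, here assumed to decay exponentially with execution cost.
import math

def execution_likelihood(cost: float, scale: float = 1.0) -> float:
    """Map an execution cost (>= 0) to a likelihood in (0, 1]; higher cost -> lower likelihood."""
    return math.exp(-cost / scale)

def expected_harm(severity: float, cost: float, scale: float = 1.0) -> float:
    """Expected Harm = severity weighted by execution likelihood."""
    return severity * execution_likelihood(cost, scale)

# Under this model, a high-severity but costly attack can score below a
# moderate-severity, easy-to-execute one -- the asymmetry the paper exploits.
hard = expected_harm(severity=0.9, cost=3.0)   # severe, but hard to carry out
easy = expected_harm(severity=0.5, cost=0.2)   # milder, but trivial to execute
assert easy > hard
```

Any monotonically decreasing likelihood model would produce the same qualitative ranking; the exponential form is chosen here purely for concreteness.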
Problem

Research questions and friction points this paper is trying to address.

LLM safety
Expected Harm
Execution Likelihood
Risk Calibration
Jailbreak
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expected Harm
Execution Likelihood
Inverse Risk Calibration
Jailbreak Vulnerability
Linear Probing
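The linear-probing finding above can be illustrated with a toy, self-contained sketch. Everything here is a synthetic assumption (the 8-dimensional "hidden states", the logistic-regression probe, the random labels), not the paper's actual setup: the point is only that if states encode severity along some direction but carry no cost information, a linear probe decodes severity well while cost prediction stays near chance.

```python
# Toy linear-probe experiment: severity is written into the synthetic hidden
# states, execution cost is not, so only the severity probe can succeed.
import math
import random

random.seed(0)
DIM, N_TRAIN, N_TEST = 8, 300, 100

def make_state(severity: int) -> list:
    # Severity is (noisily) encoded in coordinate 0; cost is encoded nowhere.
    state = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    state[0] += 2.0 if severity else -2.0
    return state

def sigmoid(z: float) -> float:
    if z < -30.0:
        return 0.0
    if z > 30.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def train_probe(X, y, epochs=100, lr=0.1):
    # Plain logistic-regression probe trained with SGD.
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            g = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def accuracy(w, b, X, y) -> float:
    preds = [(b + sum(wj * xj for wj, xj in zip(w, xi))) > 0.0 for xi in X]
    return sum(p == bool(yi) for p, yi in zip(preds, y)) / len(y)

n = N_TRAIN + N_TEST
sev = [random.randint(0, 1) for _ in range(n)]    # encoded in the states
cost = [random.randint(0, 1) for _ in range(n)]   # independent: no signal
states = [make_state(s) for s in sev]

sev_probe = train_probe(states[:N_TRAIN], sev[:N_TRAIN])
cost_probe = train_probe(states[:N_TRAIN], cost[:N_TRAIN])
sev_acc = accuracy(*sev_probe, states[N_TRAIN:], sev[N_TRAIN:])
cost_acc = accuracy(*cost_probe, states[N_TRAIN:], cost[N_TRAIN:])
# sev_acc is high; cost_acc hovers near 0.5 (chance)
```

Held-out accuracy is what matters here: the cost probe can fit random training labels slightly, but on unseen states it cannot beat chance, mirroring the paper's claim that models have no distinguishable internal representation of execution cost.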
Yen-Shan Chen
National Taiwan University, Taipei, Taiwan
Zhi Rui Tam
NTU / Appier
natural language processing
Cheng-Kuang Wu
Independent Researcher
Yun-Nung Chen
National Taiwan University, Taipei, Taiwan