🤖 AI Summary
Current safety evaluations of large language models focus solely on the severity of potential harms while neglecting the feasibility of executing attacks, leading to distorted risk assessments. This work proposes an "expected harm" metric that integrates harm severity with execution likelihood by modeling the cost of carrying out an attack, yielding a more complete safety evaluation framework. We identify and name a novel phenomenon, "inverse risk calibration": models refuse low-likelihood (high-cost) threats disproportionately strongly while remaining vulnerable to high-likelihood (low-cost) attacks. We demonstrate that this vulnerability stems from the model's internal failure to represent execution costs. Experiments show that exploiting this miscalibration can raise jailbreak success rates by up to twofold. Further analysis with linear probes confirms that models encode only harm severity and are insensitive to execution costs.
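The summary above describes expected harm as severity weighted by execution likelihood, with likelihood driven by execution cost. One way to write this down (the notation and the concrete cost function below are our illustration, not necessarily the paper's exact formulation):

```latex
% Expected harm of a query q: severity S(q) weighted by the
% probability that the threat is actually executed.
\mathbb{E}\bigl[H(q)\bigr] = S(q)\cdot P_{\mathrm{exec}}(q),
\qquad
P_{\mathrm{exec}}(q) = f\bigl(c(q)\bigr),
```

where $f$ is a decreasing function of the execution cost $c(q)$, e.g. $f(c) = e^{-\lambda c}$ for some rate $\lambda > 0$. Under any such $f$, a cheap-to-execute attack carries high likelihood, so even a moderate-severity query can dominate the expected-harm ranking.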
📝 Abstract
Current evaluations of LLM safety predominantly rely on severity-based taxonomies to assess the harmfulness of malicious queries. We argue that this formulation requires re-examination, as it assumes uniform risk across all malicious queries and neglects Execution Likelihood: the conditional probability of a threat being realized given the model's response. In this work, we introduce Expected Harm, a metric that weights the severity of a jailbreak by its execution likelihood, modeled as a function of execution cost. Through empirical analysis of state-of-the-art models, we reveal a systematic Inverse Risk Calibration: models disproportionately exhibit stronger refusal behaviors for low-likelihood (high-cost) threats while remaining vulnerable to high-likelihood (low-cost) queries. We demonstrate that this miscalibration creates a structural vulnerability: by exploiting it, we increase the attack success rate of existing jailbreaks by up to $2\times$. Finally, we trace the root cause of this failure with linear probing, which reveals that while models encode severity in their latent space to drive refusal decisions, they possess no distinguishable internal representation of execution cost, leaving them "blind" to this critical dimension of risk.