LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models often face a trade-off between jailbreak vulnerability and excessive refusal in safety alignment. This work reveals, for the first time, that this dilemma stems from the near-orthogonality between response-intent vectors and safety-judgment vectors. To address it, the authors propose a vector alignment method that requires neither fine-tuning nor architectural modifications: it employs support vector machines (SVMs) to identify critical vectors at each layer, selects safety-relevant layers, and applies minimum-norm closed-form weight updates to align the two vector spaces. Experiments across twelve mainstream large language models demonstrate that the method improves the F1 score by 11.45% over the best baseline while preserving 95.92% of model utility, effectively mitigating the tension between safety and usefulness.
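The near-orthogonality diagnosis can be illustrated with a small sketch. Everything here is a stand-in: the activations are synthetic (constructed so the two decisions vary along different coordinates, mimicking the effect the paper reports), and the class-mean difference is used as a cheap proxy for a linear SVM's separating-hyperplane normal; the paper itself fits SVMs on real hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 200

# Synthetic "layer activations" with two independent binary contrasts.
# Real usage would collect hidden states from the model instead.
answer_label = rng.integers(0, 2, size=n)   # 1 = model decides to answer
safety_label = rng.integers(0, 2, size=n)   # 1 = input judged benign
acts = rng.normal(size=(n, d))
acts[:, 0] += 3.0 * answer_label            # answer signal on coordinate 0
acts[:, 1] += 3.0 * safety_label            # safety signal on coordinate 1

def direction(x, y):
    """Class-mean difference: a cheap stand-in for a linear SVM's normal."""
    v = x[y == 1].mean(axis=0) - x[y == 0].mean(axis=0)
    return v / np.linalg.norm(v)

v_a = direction(acts, answer_label)   # "answer vector"
v_b = direction(acts, safety_label)   # "benign vector"
print(f"cos(v_a, v_b) = {v_a @ v_b:.3f}")  # close to 0: decisions ~independent
```

A cosine near zero means the model's willingness to answer carries almost no information about its safety judgment, which is the paper's explanation for why scaling one vector trades jailbreak against over-refusal.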

📝 Abstract
Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of answer vectors, but this creates a fundamental trade-off -- reducing jailbreak increases over-refusal and vice versa. We identify the root cause: LLMs encode the decision to answer (answer vector $v_a$) and the judgment of input safety (benign vector $v_b$) as nearly orthogonal directions, treating them as independent processes. We propose LLM-VA, which aligns $v_a$ with $v_b$ through closed-form weight updates, making the model's willingness to answer causally dependent on its safety assessment -- without fine-tuning or architectural changes. Our method identifies vectors at each layer using SVMs, selects safety-relevant layers, and iteratively aligns vectors via minimum-norm weight modifications. Experiments on 12 LLMs demonstrate that LLM-VA achieves 11.45% higher F1 than the best baseline while preserving 95.92% utility, and automatically adapts to each model's safety bias without manual tuning. Code and models are available at https://hotbento.github.io/LLM-VA-Web/.
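The "minimum-norm closed-form weight update" admits a standard rank-one form: the smallest change (in Frobenius norm) to a weight matrix W that makes it map a given input direction to a desired target. The sketch below applies it so that a layer sends the answer direction v_a onto the benign direction v_b. This is an illustrative instance of the general formula, not the paper's full pipeline (which selects safety-relevant layers and iterates); W, v_a, and v_b are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Hypothetical per-layer directions (the paper derives these from SVMs
# fit on hidden states; here they are random unit vectors).
v_a = rng.normal(size=d); v_a /= np.linalg.norm(v_a)  # "answer" direction
v_b = rng.normal(size=d); v_b /= np.linalg.norm(v_b)  # "benign" direction

W = rng.normal(size=(d, d))  # stand-in for a layer weight matrix

# Minimum-norm closed-form update: the smallest ||dW||_F such that
# (W + dW) @ v_a equals the target (here, aligned with v_b while
# preserving the original output magnitude):
#   dW = (target - W @ v_a) v_a^T / (v_a^T v_a)
target = np.linalg.norm(W @ v_a) * v_b
delta = np.outer(target - W @ v_a, v_a) / (v_a @ v_a)
W_new = W + delta

# After the update, the layer maps v_a exactly onto the v_b direction.
out = W_new @ v_a
cos = out @ v_b / (np.linalg.norm(out) * np.linalg.norm(v_b))
print(f"cos(W' v_a, v_b) = {cos:.6f}")  # 1.0 up to floating-point error
```

Because the update is rank-one and closed-form, it needs no gradient steps or fine-tuning data, which matches the paper's claim of working without training or architectural changes.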
Problem

Research questions and friction points this paper is trying to address.

jailbreak
over-refusal
safety alignment
vector steering
LLM safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

vector alignment
safety alignment
over-refusal
jailbreak
closed-form weight update