Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM risk evaluations rely on input-space probes, which can only lower-bound a model's worst-case behavior. This work proposes evaluating LLMs with model tampering attacks, which modify latent activations or weights to elicit harmful capabilities more directly. Pitting state-of-the-art techniques for removing harmful LLM capabilities against a suite of 5 input-space and 6 model tampering attacks, the authors show that (1) model resilience to capability elicitation attacks lies on a low-dimensional robustness subspace; (2) the success rate of model tampering attacks empirically predicts, and conservatively estimates, the success of held-out input-space attacks; and (3) state-of-the-art unlearning methods can be undone within 16 steps of fine-tuning. Together these results highlight the difficulty of removing harmful capabilities and show that model tampering attacks enable substantially more rigorous evaluations than input-space attacks alone. Models are released on Hugging Face.

📝 Abstract
Evaluations of large language model (LLM) risks and capabilities are increasingly being incorporated into AI risk management and governance frameworks. Currently, most risk evaluations are conducted by designing inputs that elicit harmful behaviors from the system. However, a fundamental limitation of this approach is that the harmfulness of the behaviors identified during any particular evaluation can only lower bound the model's worst-possible-case behavior. As a complementary method for eliciting harmful behaviors, we propose evaluating LLMs with model tampering attacks which allow for modifications to latent activations or weights. We pit state-of-the-art techniques for removing harmful LLM capabilities against a suite of 5 input-space and 6 model tampering attacks. In addition to benchmarking these methods against each other, we show that (1) model resilience to capability elicitation attacks lies on a low-dimensional robustness subspace; (2) the attack success rate of model tampering attacks can empirically predict and offer conservative estimates for the success of held-out input-space attacks; and (3) state-of-the-art unlearning methods can easily be undone within 16 steps of fine-tuning. Together these results highlight the difficulty of removing harmful LLM capabilities and show that model tampering attacks enable substantially more rigorous evaluations than input-space attacks alone. We release models at https://huggingface.co/LLM-GAT
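The abstract's core idea, tampering with latent activations rather than crafting adversarial inputs, can be illustrated with a minimal sketch. Everything below is a hypothetical toy (a random linear output head standing in for a model), not the paper's released code: a perturbation added to a hidden state steers the output toward a target token, bypassing any input-space filtering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a hidden state h is mapped to logits by an output head W.
d, vocab = 8, 5
W = rng.normal(size=(vocab, d))
h = rng.normal(size=d)

def logits(hidden):
    return W @ hidden

# Model tampering: perturb the latent activation directly, steering it
# along the readout direction of a target ("harmful") token t.
t = 3
delta = 0.5 * W[t]
h_tampered = h + delta

baseline = logits(h)
tampered = logits(h_tampered)

# The perturbation raises the target token's logit by 0.5 * ||W[t]||^2 > 0,
# regardless of what input produced h.
assert tampered[t] > baseline[t]
```

In a real LLM the same idea would be applied to a transformer's residual stream (e.g. via a forward hook), but the geometry is the same: the attack operates on internal representations, so input-side safeguards never see it.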
Problem

Research questions and friction points this paper is trying to address.

Input-space evaluations only lower-bound worst-case LLM behavior
Assessing resilience to capability elicitation attacks
Stress-testing state-of-the-art unlearning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model tampering attacks enable more rigorous LLM evaluations
Latent activation and weight modification for capability elicitation
Unlearning methods shown reversible within 16 fine-tuning steps
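The last finding, that unlearning can be undone within 16 fine-tuning steps, can be illustrated with a toy regression. All numbers here are illustrative assumptions, not the paper's setup: a scalar parameter "unlearned" back to zero is recovered by a handful of gradient steps on data from the original task.

```python
import numpy as np

# Toy "capability": predicting y = w_true * x. "Unlearning" resets w to 0.
w_true = 2.0
w = 0.0          # the unlearned parameter
lr = 0.3

# A few fine-tuning steps on data generated by the original capability
# suffice to recover it.
xs = np.linspace(-1, 1, 20)
ys = w_true * xs
for step in range(16):
    grad = np.mean(2 * (w * xs - ys) * xs)   # d/dw of the MSE loss
    w -= lr * grad

# After 16 steps the "removed" capability is back to within a few percent.
assert abs(w - w_true) < 0.05
```

The toy makes the mechanism plain: if the information needed for the behavior is still latent in the parameters (or easily re-derivable from nearby data), a short fine-tuning run restores it, which is why the paper treats fine-tuning as a tampering attack rather than a benign operation.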