TamperBench: Systematically Stress-Testing LLM Safety Under Fine-Tuning and Tampering

📅 2026-02-06
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the absence of a unified benchmark for evaluating the safety robustness of large language models (LLMs) under fine-tuning and adversarial tampering, which hinders systematic comparison of their safety, utility, and resilience. We propose the first comprehensive and reproducible evaluation framework for LLM tamper resistance, integrating attacks in both weight space (e.g., jailbreak-tuning) and latent representation space, coupled with systematic hyperparameter sweeps and alignment-stage defense mechanisms such as Triplet. The framework introduces standardized metrics for safety and capability assessment. Evaluations across 21 open-weight LLMs under nine tampering threats reveal that jailbreak-tuning is the most destructive attack vector, that Triplet emerges as the most effective defense, and that post-training stages critically influence model robustness against tampering.

📝 Abstract
As increasingly capable open-weight large language models (LLMs) are deployed, improving their tamper resistance against unsafe modifications, whether accidental or intentional, becomes critical to minimize risks. However, there is no standard approach to evaluate tamper resistance. Varied data sets, metrics, and tampering configurations make it difficult to compare safety, utility, and robustness across different models and defenses. To this end, we introduce TamperBench, the first unified framework to systematically evaluate the tamper resistance of LLMs. TamperBench (i) curates a repository of state-of-the-art weight-space fine-tuning attacks and latent-space representation attacks; (ii) enables realistic adversarial evaluation through systematic hyperparameter sweeps per attack-model pair; and (iii) provides both safety and utility evaluations. TamperBench requires minimal additional code to specify any fine-tuning configuration, alignment-stage defense method, and metric suite while ensuring end-to-end reproducibility. We use TamperBench to evaluate 21 open-weight LLMs, including defense-augmented variants, across nine tampering threats using standardized safety and capability metrics with hyperparameter sweeps per model-attack pair. This yields novel insights, including effects of post-training on tamper resistance, that jailbreak-tuning is typically the most severe attack, and that Triplet emerges as a leading alignment-stage defense. Code is available at: https://github.com/criticalml-uw/TamperBench
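The abstract's core evaluation idea, sweeping attack hyperparameters per attack-model pair and reporting the adversary's best-found result, can be sketched in plain Python. This is a hypothetical illustration only, not TamperBench's actual API: the names `AttackConfig`, `run_attack`, and `worst_case_safety`, and the toy severity numbers, are all invented for exposition.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class AttackConfig:
    attack: str   # e.g. "jailbreak-tuning" (weight space) or "latent-space"
    lr: float     # fine-tuning learning rate
    epochs: int   # number of tampering epochs

def run_attack(model: str, cfg: AttackConfig) -> dict:
    # Stand-in for tampering with `model` and scoring it; a real run would
    # fine-tune the weights (or perturb latents) and measure refusal behavior
    # plus task capability. The severity values here are illustrative only.
    severity = {"jailbreak-tuning": 0.9, "latent-space": 0.6}[cfg.attack]
    safety = max(0.0, 1.0 - severity * cfg.lr * 1e4 * cfg.epochs / 10)
    return {"safety": round(safety, 3), "utility": 0.8}

def worst_case_safety(model, attack, lrs, epoch_counts) -> float:
    # Realistic adversarial evaluation: sweep the attack's hyperparameters
    # and report the MINIMUM safety score, i.e. the strongest configuration
    # the attacker found, rather than an average over arbitrary settings.
    scores = [run_attack(model, AttackConfig(attack, lr, ep))["safety"]
              for lr, ep in product(lrs, epoch_counts)]
    return min(scores)
```

Reporting the sweep minimum rather than a single fixed configuration is what makes cross-model and cross-defense comparisons meaningful: a defense only counts as robust if it survives the attacker's best hyperparameters.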
Problem

Research questions and friction points this paper is trying to address.

tamper resistance
large language models
fine-tuning
safety evaluation
adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

tamper resistance
fine-tuning attacks
adversarial evaluation
alignment-stage defense
systematic benchmarking