🤖 AI Summary
Open-source large language models (LLMs) are vulnerable to malicious fine-tuning attacks that elicit harmful content. To address this, we propose a hypernetwork-driven bilevel adversarial training framework that enhances weight-level robustness while preserving general capabilities. Our method dynamically defends against adversarial LoRA weights—generated by red-teaming—within the model's internal activation space, integrating latent-space manipulation with weight-space attack modeling for fine-grained safety enforcement. Experiments demonstrate up to a 27.4% improvement in resistance across 52 malicious fine-tuning attacks, with less than 0.5% degradation on major benchmarks (MMLU, HellaSwag, GSM8K), significantly outperforming existing baselines. Our key contribution is the first incorporation of hypernetworks into bilevel adversarial optimization, enabling efficient and balanced co-optimization of safety and capability.
📝 Abstract
The release of open-weight large language models (LLMs) creates a tension between advancing accessible research and preventing misuse, such as malicious fine-tuning to elicit harmful content. Current safety measures struggle to preserve the general capabilities of the LLM while resisting a determined adversary with full access to the model's weights and architecture, who can use full-parameter fine-tuning to erase existing safeguards. To address this, we introduce AntiDote, a bilevel optimization procedure for training LLMs to be resistant to such tampering. AntiDote involves an auxiliary adversary hypernetwork that learns to generate malicious Low-Rank Adaptation (LoRA) weights conditioned on the defender model's internal activations. The defender LLM is then trained with an objective to nullify the effect of these adversarial weight additions, forcing it to maintain its safety alignment. We validate this approach against a diverse suite of 52 red-teaming attacks, including jailbreak prompting, latent-space manipulation, and direct weight-space attacks. AntiDote is up to 27.4% more robust against adversarial attacks than both tamper-resistance and unlearning baselines. Crucially, this robustness comes with a minimal trade-off in utility, incurring a performance degradation of less than 0.5% across capability benchmarks including MMLU, HellaSwag, and GSM8K. Our work offers a practical and compute-efficient methodology for building open-weight models in which safety is an integral and resilient property.
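The bilevel structure described in the abstract can be sketched schematically as follows. The notation here is ours, not taken from the paper: $\theta$ denotes the defender LLM's parameters, $\phi$ the hypernetwork's parameters, $h_\theta$ the defender's internal activations on which the hypernetwork is conditioned, and $\Delta W_\phi(h_\theta) = B_\phi(h_\theta)\, A_\phi(h_\theta)$ the low-rank (LoRA) weight perturbation it emits.

```latex
% Inner level: the adversary hypernetwork seeks LoRA weights that
% maximize a harmfulness objective on the perturbed defender.
\phi^\ast(\theta) \in \arg\max_{\phi} \;
  \mathcal{L}_{\text{harm}}\!\left(\theta + \Delta W_\phi(h_\theta)\right)

% Outer level: the defender minimizes a safety loss under the
% adversarial perturbation, plus a utility term that preserves
% general capabilities (lambda trades off the two objectives).
\min_{\theta} \;
  \mathcal{L}_{\text{safe}}\!\left(\theta + \Delta W_{\phi^\ast(\theta)}(h_\theta)\right)
  + \lambda \, \mathcal{L}_{\text{util}}(\theta)
```

The specific loss functions, the trade-off weight $\lambda$, and how the inner maximization is approximated in practice are details the abstract does not specify; this sketch only captures the stated structure of an adversary generating activation-conditioned LoRA weights and a defender trained to nullify them.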