Investigating Adversarial Trigger Transfer in Large Language Models

📅 2024-04-24
📈 Citations: 6
Influential: 1
📄 PDF
🤖 AI Summary
This study systematically investigates the cross-model transferability of adversarial triggers in large language models (LLMs) and examines how different alignment paradigms—preference optimization (APO) versus supervised fine-tuning (AFT)—affect robustness against such attacks. Using adversarial optimization to generate triggers, extensive cross-model transfer experiments, and multi-dimensional safety evaluation across 13 open-source LLMs and five hazardous instruction categories, we empirically demonstrate that adversarial triggers exhibit low and unstable transfer success rates (>90% failure), yet achieve strong generalization upon successful transfer—activating unseen dangerous instructions across domains. Crucially, APO-aligned models show exceptional resistance to jailbreaking—even under white-box conditions—whereas AFT-aligned models, despite apparent safety, are highly vulnerable. Our core contribution is identifying alignment methodology as the decisive factor governing adversarial robustness in LLMs, thereby correcting the widespread misconception that adversarial triggers possess high cross-model transferability.

Technology Category

Application Category

📝 Abstract
Recent work has developed optimization procedures to find token sequences, called adversarial triggers, which can elicit unsafe responses from aligned language models. These triggers are believed to be highly transferable, i.e., a trigger optimized on one model can jailbreak other models. In this paper, we concretely show that such adversarial triggers are not consistently transferable. We extensively investigate trigger transfer amongst 13 open models and observe poor and inconsistent transfer. Our experiments further reveal a significant difference in robustness to adversarial triggers between models Aligned by Preference Optimization (APO) and models Aligned by Fine-Tuning (AFT). We find that APO models are extremely hard to jailbreak even when the trigger is optimized directly on the model. On the other hand, while AFT models may appear safe on the surface, exhibiting refusals to a range of unsafe instructions, we show that they are highly susceptible to adversarial triggers. Lastly, we observe that most triggers optimized on AFT models also generalize to new unsafe instructions from five diverse domains, further emphasizing their vulnerability. Overall, our work highlights the need for more comprehensive safety evaluations for aligned language models.
Problem

Research questions and friction points this paper is trying to address.

Assessing adversarial trigger transferability across large language models
Comparing robustness of APO and AFT models to adversarial triggers
Evaluating vulnerability of AFT models to generalized unsafe instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Investigates adversarial trigger transfer in models
Compares APO and AFT model robustness
Tests triggers on diverse unsafe instructions
🔎 Similar Papers
No similar papers found.