🤖 AI Summary
This work addresses the challenge of transferring model-specific alignment data across large language models (LLMs) for Trojan attack mitigation. The authors propose TeleLoRA, a framework for zero-shot cross-model Trojan mitigation: it learns an "alignment knowledge transfer" paradigm that generalizes to unseen LLMs without requiring alignment data from the target model. At its core is a permutation-symmetric LoRA weight generator that fuses local activation information from multiple LLMs, allowing it to generalize across models with different architectures and sizes; a memory-efficient parameter-sharing design further keeps training feasible for billion-parameter models with minimal computational resources. Evaluated on LLM Trojan mitigation benchmarks, the approach significantly reduces attack success rates while preserving benign task performance, overcoming the strong data- and model-coupling constraints of conventional fine-tuning-based mitigation methods.
📝 Abstract
Mitigating Trojans in Large Language Models (LLMs) is one of many tasks where alignment data is LLM-specific, as different LLMs have different Trojan triggers and trigger behaviors to be removed. In this paper, we introduce TeleLoRA (Teleporting Low-Rank Adaptation), a novel framework that synergizes model-specific alignment data across multiple LLMs to enable zero-shot Trojan mitigation on unseen LLMs without alignment data. TeleLoRA learns a unified generator of LoRA adapter weights by leveraging local activation information across multiple LLMs. This generator is designed to be permutation symmetric so that it generalizes across models with different architectures and sizes. We optimize the model design for memory efficiency, making it feasible to train with large-scale LLMs using minimal computational resources. Experiments on LLM Trojan mitigation benchmarks demonstrate that TeleLoRA effectively reduces attack success rates while preserving the benign performance of the models.
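To make the permutation-symmetry property concrete, here is a minimal NumPy sketch of the general idea: a DeepSets-style hypernetwork maps per-neuron activation features to the factors of a LoRA update, so that permuting a layer's neurons permutes the generated weights consistently. This is an illustration only — the actual TeleLoRA generator architecture, its activation features, and its training procedure are not specified in the abstract, and names such as `equivariant_map` and `generate_lora` are hypothetical.

```python
import numpy as np

def equivariant_map(feats, w_self, w_pool):
    """DeepSets-style layer: a per-neuron transform plus a mean-pooled term.
    Permuting the rows of `feats` permutes the output rows identically,
    which is the permutation-equivariance property."""
    pooled = feats.mean(axis=0, keepdims=True)      # (1, f), permutation-invariant
    return feats @ w_self + pooled @ w_pool         # (n, r), permutation-equivariant

def generate_lora(feats_in, feats_out, w_self, w_pool):
    """Map per-neuron activation features of a target layer to a rank-r
    LoRA update delta_W = B @ A (shapes: B is (d_out, r), A is (r, d_in))."""
    A = equivariant_map(feats_in, w_self, w_pool).T   # (r, d_in)
    B = equivariant_map(feats_out, w_self, w_pool)    # (d_out, r)
    return B @ A                                      # (d_out, d_in)

# Toy demonstration that the generated update respects neuron permutations.
rng = np.random.default_rng(0)
d_in, d_out, f, r = 8, 6, 4, 3
w_self = rng.standard_normal((f, r))      # shared generator parameters:
w_pool = rng.standard_normal((f, r))      # independent of d_in / d_out
feats_in = rng.standard_normal((d_in, f))   # per-input-neuron features
feats_out = rng.standard_normal((d_out, f)) # per-output-neuron features

dW = generate_lora(feats_in, feats_out, w_self, w_pool)
perm = rng.permutation(d_in)
dW_perm = generate_lora(feats_in[perm], feats_out, w_self, w_pool)
assert np.allclose(dW_perm, dW[:, perm])  # columns permute with the neurons
```

Because the generator's own parameters (`w_self`, `w_pool`) do not depend on the layer widths `d_in` and `d_out`, the same generator can, in principle, emit LoRA weights for layers of different sizes — one plausible reading of how a single generator could serve LLMs of different architectures and scales.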