🤖 AI Summary
Fine-tuning large language models (LLMs) for specialized domains such as code generation, biomedical analysis, and mathematical reasoning often systematically degrades safety alignment, increasing the risk of harmful outputs. To address this, EnchTable is a safety alignment transfer framework that decouples safety constraints from task-specific reasoning via NTK-based safety vector distillation and interference-aware merging. The method transfers alignment across model architectures and sizes without additional training while preserving task utility. Evaluated across three domains, three mainstream model architectures, and eleven benchmark datasets, it significantly reduces unsafe output rates, outperforms vendor-released safety models, demonstrates strong resistance to static and dynamic jailbreaking attacks, and offers plug-and-play deployment compatibility.
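The summary's "safety vector distillation" builds on the general idea of weight-space task vectors. The paper's NTK-guided procedure is not reproduced here; the sketch below only illustrates the underlying primitive under that assumption: a safety vector as the per-parameter delta between a safety-aligned model and its unaligned base, which can then be added to a fine-tuned model without any gradient training. The helper names (`extract_safety_vector`, `apply_safety_vector`) and the scalar-weight representation are hypothetical simplifications.

```python
# Illustrative sketch only -- EnchTable's actual NTK-guided distillation is
# more involved. Weights are modeled as {parameter_name: float} dicts; real
# models would use tensors (e.g. a PyTorch state_dict).

def extract_safety_vector(aligned_weights, base_weights):
    """Per-parameter delta capturing what safety alignment changed."""
    return {name: aligned_weights[name] - base_weights[name]
            for name in base_weights}

def apply_safety_vector(task_weights, safety_vector, scale=1.0):
    """Add the (scaled) safety delta to a fine-tuned task model,
    transferring alignment without any training step."""
    return {name: w + scale * safety_vector.get(name, 0.0)
            for name, w in task_weights.items()}
```

A tiny usage example: if alignment moved a weight from 1.0 to 1.5, the safety vector stores +0.5, and applying it to a task model's weight of 2.0 yields 2.5.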
📝 Abstract
Large language models (LLMs) are frequently fine-tuned to achieve high performance in specialized domains such as code generation, biomedical analysis, and mathematical problem solving. However, fine-tuning often introduces a critical vulnerability: the systematic degradation of safety alignment, which undermines ethical guidelines and increases the risk of harmful outputs. To address this challenge, we introduce EnchTable, a novel framework that transfers and maintains safety alignment in downstream LLMs without requiring extensive retraining. EnchTable leverages a Neural Tangent Kernel (NTK)-based safety vector distillation method to decouple safety constraints from task-specific reasoning, ensuring compatibility across diverse model architectures and sizes. Additionally, our interference-aware merging technique effectively balances safety and utility, minimizing performance compromises across task domains. We implemented a fully functional prototype of EnchTable on three task domains and three distinct LLM architectures, and evaluated it through extensive experiments on eleven diverse datasets, assessing both utility and model safety. Our evaluation spans LLMs from different vendors, demonstrating EnchTable's generalization capability. Furthermore, EnchTable exhibits robust resistance to both static and dynamic jailbreaking attacks, outperforming vendor-released safety models in mitigating adversarial prompts. Comparative analyses with six parameter-modification methods and two inference-time alignment baselines show that EnchTable achieves a significantly lower unsafe rate and higher utility score, with universal applicability across task domains. Finally, we validate that EnchTable can be seamlessly integrated into various deployment pipelines without significant overhead.
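The abstract's "interference-aware merging" is not specified in detail here, so the sketch below illustrates one common interference heuristic from the model-merging literature (sign-conflict damping, in the spirit of TIES-style merging) rather than the paper's actual algorithm: a safety delta is applied at full strength only where it does not point against the task fine-tuning delta. The function name, the `damp` parameter, and the scalar-weight representation are all assumptions for illustration.

```python
# Hedged sketch of an interference-aware merge, NOT EnchTable's method.
# Deltas are modeled as {parameter_name: float}; real systems operate on
# full weight tensors.

def interference_aware_merge(task_delta, safety_delta, damp=0.3):
    """Combine a task fine-tuning delta with a safety delta, damping the
    safety update wherever the two deltas point in opposite directions."""
    merged = {}
    for name, t in task_delta.items():
        s = safety_delta.get(name, 0.0)
        if t * s >= 0:
            # Directions agree (or one is zero): apply safety delta fully.
            merged[name] = t + s
        else:
            # Sign conflict: the safety update would undo task learning,
            # so scale it down instead of applying it outright.
            merged[name] = t + damp * s
    return merged
```

For example, with a task delta of {a: +1.0, b: -1.0} and a safety delta of {a: +0.5, b: +0.5}, parameter `a` receives the full safety update (1.5) while the conflicting update on `b` is damped (-0.85 with damp=0.3), which is one simple way to trade off safety against utility during merging.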