EnchTable: Unified Safety Alignment Transfer in Fine-tuned Large Language Models

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-tuning large language models (LLMs) for specialized domains such as code generation, biomedical analysis, and mathematical reasoning often induces systematic degradation of safety alignment, increasing the risk of harmful outputs. To address this, the paper proposes EnchTable, a unified safety alignment transfer framework that decouples safety constraints from task-specific reasoning via NTK-based safety vector distillation and interference-aware merging. The method transfers alignment across diverse model architectures and scales without retraining the fine-tuned model. Evaluated across three task domains, three mainstream model architectures, and eleven benchmark datasets, it achieves a significantly lower unsafe output rate than vendor-released safety models while preserving utility, demonstrating strong resistance to jailbreaking attacks, and integrating into deployment pipelines without significant overhead.

📝 Abstract
Many machine learning models are fine-tuned from large language models (LLMs) to achieve high performance in specialized domains like code generation, biomedical analysis, and mathematical problem solving. However, this fine-tuning process often introduces a critical vulnerability: the systematic degradation of safety alignment, undermining ethical guidelines and increasing the risk of harmful outputs. Addressing this challenge, we introduce EnchTable, a novel framework designed to transfer and maintain safety alignment in downstream LLMs without requiring extensive retraining. EnchTable leverages a Neural Tangent Kernel (NTK)-based safety vector distillation method to decouple safety constraints from task-specific reasoning, ensuring compatibility across diverse model architectures and sizes. Additionally, our interference-aware merging technique effectively balances safety and utility, minimizing performance compromises across various task domains. We implemented a fully functional prototype of EnchTable on three different task domains and three distinct LLM architectures, and evaluated its performance through extensive experiments on eleven diverse datasets, assessing both utility and model safety. Our evaluations include LLMs from different vendors, demonstrating EnchTable's generalization capability. Furthermore, EnchTable exhibits robust resistance to static and dynamic jailbreaking attacks, outperforming vendor-released safety models in mitigating adversarial prompts. Comparative analyses with six parameter modification methods and two inference-time alignment baselines reveal that EnchTable achieves a significantly lower unsafe rate, higher utility score, and universal applicability across different task domains. Additionally, we validate EnchTable can be seamlessly integrated into various deployment pipelines without significant overhead.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning LLMs degrades safety alignment, increasing harmful output risks
Transfer safety alignment to downstream models without extensive retraining
Balance safety and utility across diverse model architectures and tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfers safety alignment without retraining fine-tuned models
Uses NTK-based distillation to separate safety from task reasoning
Employs interference-aware merging to balance safety and utility
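The safety-vector idea behind these contributions can be illustrated with a toy task-arithmetic sketch: subtract an unaligned base model's weights from its safety-aligned counterpart to obtain a "safety vector", then add that vector to a domain fine-tuned model while masking components that conflict with the task update. Note this is a simplified illustration under assumed plain weight arithmetic; the paper's actual NTK-based distillation and interference-aware fusion are more sophisticated, and the function names here are hypothetical.

```python
import numpy as np

def safety_vector(aligned, base):
    """Safety vector: the parameter delta introduced by safety alignment.

    `aligned` and `base` are dicts mapping parameter names to arrays
    (a stand-in for model state dicts).
    """
    return {k: aligned[k] - base[k] for k in base}

def interference_aware_merge(finetuned, base, sv, lam=1.0):
    """Add the safety vector to a fine-tuned model, skipping components
    whose sign opposes the task update (a crude interference check,
    not the paper's actual fusion rule)."""
    merged = {}
    for k, w in finetuned.items():
        task = w - base[k]  # task vector: delta introduced by fine-tuning
        # keep only safety components that do not oppose the task direction
        mask = np.sign(sv[k]) * np.sign(task) >= 0
        merged[k] = w + lam * sv[k] * mask
    return merged
```

In this sketch, the masking step is what trades off safety against utility: components where the safety delta would directly undo the task update are dropped rather than averaged.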
Authors

Jialin Wu — Ant Group
Kecen Li — Institute of Automation, Chinese Academy of Sciences (Data Privacy, Machine Learning)
Zhicong Huang — Ant Group (Cryptography, Security and Privacy, Machine Learning)
Xinfeng Li — Nanyang Technological University
Xiaofeng Wang — Nanyang Technological University
Cheng Hong — Ant Group