🤖 AI Summary
This work addresses the vulnerability of large language models to alignment drift during fine-tuning, wherein even benign training data can inadvertently degrade their ability to refuse harmful requests. To mitigate this, the authors propose PACT, a framework that concentrates alignment preservation on safety-critical tokens rather than intervening model-wide. Using an aligned reference model, PACT identifies these key tokens and regularizes the fine-tuned model's output confidence on them to stay consistent with the reference, while letting non-safety tokens update freely for downstream task adaptation. This localized regularization curbs alignment drift without sacrificing task performance, substantially improving model safety after fine-tuning.
📝 Abstract
Large language models (LLMs) often require fine-tuning (FT) to perform well on downstream tasks, but FT can induce safety-alignment drift even when the training dataset contains only benign data. Prior work shows that introducing a small fraction of harmful data can substantially compromise LLM refusal behavior, causing LLMs to comply with harmful requests. Existing defense methods often rely on model-wide interventions, such as restricting which parameters are updated or injecting additional safety data, which can limit generality and degrade downstream task performance. To address these limitations, we propose a fine-tuning framework called Preserving Safety Alignment via Constrained Tokens (PACT), which stabilizes the model's confidence on safety-related tokens. Our approach is motivated by the empirical observation that safety-aligned behavior is reflected in the model's token-level output confidence and is often concentrated on a small subset of safety-related tokens. During downstream fine-tuning, we regularize the fine-tuned model to match the aligned reference model's confidence on safety-related tokens at each response step, while leaving non-safety tokens largely unconstrained to allow effective task adaptation. This targeted constraint prevents alignment drift without imposing global restrictions that typically trade off with model utility.
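The core idea of the abstract, matching the reference model's confidence only at safety-related positions, can be sketched as a masked penalty added to the task loss. The sketch below uses a per-position KL divergence as the confidence-matching term; this is an illustrative choice, not necessarily the paper's exact penalty, and all names (`pact_penalty`, `safety_mask`) are hypothetical:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pact_penalty(student_logits, ref_logits, safety_mask):
    """Confidence-matching penalty on safety-related tokens only.

    student_logits: (T, V) logits of the model being fine-tuned.
    ref_logits:     (T, V) logits of the frozen aligned reference model.
    safety_mask:    (T,) 1 at safety-related response steps, 0 elsewhere.
    Returns KL(ref || student) averaged over masked positions; non-safety
    positions contribute nothing, so they remain free to adapt to the task.
    """
    p_ref = softmax(ref_logits)
    p_stu = softmax(student_logits)
    kl = (p_ref * (np.log(p_ref + 1e-12) - np.log(p_stu + 1e-12))).sum(-1)
    mask = np.asarray(safety_mask, dtype=float)
    return float((kl * mask).sum() / np.maximum(mask.sum(), 1.0))
```

In training, this term would be added to the downstream cross-entropy loss with a weighting coefficient (e.g. `loss = task_ce + lam * pact_penalty(...)`), so that gradient updates are constrained only where the reference model's refusal behavior is encoded.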