Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to safety-alignment degradation during task-specific fine-tuning, a problem exacerbated by the high computational cost and complexity of existing defense mechanisms. To overcome these limitations, the authors propose an efficient post-hoc defense framework based on post-training quantization that integrates safety alignment directly into the quantization process. Guided by an analysis of representational structure, they reformulate quantization as a dual-objective optimization problem that jointly targets compression fidelity and safety preservation. This approach decouples safety alignment from fine-tuning and requires no additional training, enabling rapid restoration of model safety at deployment time. Experiments demonstrate that the method significantly suppresses unsafe behaviors across multiple models and datasets while maintaining competitive task performance. Notably, a 7B-parameter model can be processed in under 40 minutes on a single RTX 4090 GPU, substantially reducing both memory footprint and GPU time.

📝 Abstract
Public large language models (LLMs) are typically safety-aligned during pretraining, yet the task-specific fine-tuning required for deployment often erodes this alignment and introduces safety risks. Existing defenses either embed safety recovery into fine-tuning or rely on fine-tuning-derived priors for post-hoc correction, leaving safety recovery tightly coupled with training and incurring high computational overhead and a complex workflow. To address these challenges, we propose Q-realign, a post-hoc defense method based on post-training quantization and guided by an analysis of representational structure. By reframing quantization as a dual-objective procedure for compression and safety, Q-realign decouples safety alignment from fine-tuning and naturally piggybacks on modern deployment pipelines. Experiments across multiple models and datasets demonstrate that our method substantially reduces unsafe behaviors while preserving task performance, with significant reductions in memory usage and GPU hours. Notably, our approach can recover the safety alignment of a fine-tuned 7B LLM on a single RTX 4090 within 40 minutes. Overall, our work provides a practical, turnkey solution for safety-aware deployment.
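The abstract does not spell out the dual-objective formulation; one plausible sketch, purely illustrative and not taken from the paper, would augment a standard layer-wise reconstruction objective (as in GPTQ-style post-training quantization) with a safety-preservation penalty. Here $W$ is a full-precision weight matrix, $\hat{W}$ its quantized counterpart constrained to a quantization grid $\mathcal{Q}$, $X$ calibration activations, and $\lambda$, $\mathcal{L}_{\text{safe}}$ assumed names for the trade-off weight and safety term:

```latex
\min_{\hat{W} \in \mathcal{Q}} \;
\underbrace{\lVert W X - \hat{W} X \rVert_F^2}_{\text{compression: reconstruction error}}
\;+\;
\lambda \,
\underbrace{\mathcal{L}_{\text{safe}}(\hat{W})}_{\text{safety preservation}}
```

Under this reading, setting $\lambda = 0$ recovers ordinary post-training quantization, while $\mathcal{L}_{\text{safe}}$ could, for example, penalize deviation of safety-relevant representations from those of the original aligned model; the actual objective used by Q-realign may differ.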
Problem

Research questions and friction points this paper is trying to address.

safety alignment
fine-tuning
large language models
deployment
post-hoc correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantization
safety alignment
post-hoc defense
LLM deployment
representational structure