🤖 AI Summary
Quantization of large language models (LLMs) for deployment on resource-constrained devices is increasingly common, yet emerging calibration-free quantization methods can severely compromise model safety, necessitating systematic evaluation and effective mitigation. This work presents the first comprehensive, multi-dimensional safety assessment of mainstream quantization techniques and calibration-free approaches, revealing widespread degradation in safety capabilities. To address this, the authors propose Q-resafe, a quantization-aware safety repair framework that combines lightweight fine-tuning with safety-aligned optimization, patching quantized models' vulnerabilities without sacrificing utility. Experiments show that Q-resafe restores quantized LLMs' safety to near pre-quantization levels, with an average improvement of 42.6%, and remains robust under strong adversarial evaluation. The approach establishes a scalable co-optimization paradigm for quantization and safety, enabling trustworthy LLM deployment at the edge.
📝 Abstract
Quantized large language models (LLMs) have gained increasing attention and significance for enabling deployment in resource-constrained environments. However, emerging studies on a few calibration-dataset-free quantization methods suggest that quantization may compromise the safety capabilities of LLMs, underscoring the urgent need for systematic safety evaluations and effective mitigation strategies. In this paper, we present comprehensive safety evaluations across various mainstream quantization techniques and diverse calibration datasets, utilizing widely accepted safety benchmarks. To address the identified safety vulnerabilities, we propose a quantization-aware safety patching framework, Q-resafe, to efficiently restore the safety capabilities of quantized LLMs while minimizing any adverse impact on utility. Extensive experimental results demonstrate that Q-resafe successfully re-aligns the safety of quantized LLMs with their pre-quantization counterparts, even under challenging evaluation scenarios. The project page is available at: https://github.com/Thecommonirin/Qresafe.
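To make the "quantize, then patch" idea concrete, here is a minimal toy sketch. It is not the paper's algorithm: round-to-nearest symmetric quantization and an additive per-weight correction stand in for whatever quantization scheme and safety-aligned fine-tuning Q-resafe actually uses (the abstract does not specify them). The point it illustrates is structural: the quantized weights stay frozen, and a small trainable patch is applied on top at inference time.

```python
# Toy illustration (NOT the Q-resafe algorithm): round-to-nearest int4
# quantization of a weight vector, plus an additive "safety patch" applied
# on top of the frozen quantized weights.

def quantize_rtn(w, bits=4):
    """Symmetric round-to-nearest quantization of a list of floats."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in w) / qmax or 1.0  # avoid zero scale
    q = [max(-qmax, min(qmax, round(x / scale))) for x in w]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

def apply_patch(w_q, delta):
    """Effective weights = frozen quantized weights + trainable patch."""
    return [wq + d for wq, d in zip(w_q, delta)]

w = [0.12, -0.53, 0.98, -0.07]          # original (pre-quantization) weights
q, s = quantize_rtn(w)                   # integer codes + scale
w_q = dequantize(q, s)                   # what the quantized model computes with
# In the real framework the patch would be learned from safety data; here we
# simply set it to the quantization residual to show the mechanism.
delta = [wi - wqi for wi, wqi in zip(w, w_q)]
patched = apply_patch(w_q, delta)        # recovers the original weights
```

The design point the sketch mirrors is that repairing safety does not require re-running quantization: the base quantized weights are untouched, and only the lightweight patch is updated.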