Q-resafe: Assessing Safety Risks and Quantization-aware Safety Patching for Quantized Large Language Models

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantization of large language models (LLMs) for deployment on resource-constrained devices is increasingly common, yet emerging calibration-free quantization methods often severely compromise model safety, necessitating systematic evaluation and effective mitigation. This work presents the first comprehensive, multi-dimensional safety assessment of mainstream quantization techniques and calibration-free approaches, revealing widespread degradation in safety capabilities. To address this, we propose Q-resafe, a quantization-aware safety repair framework that integrates lightweight fine-tuning with safety-aligned optimization, precisely patching quantized models' vulnerabilities without sacrificing utility. Experiments demonstrate that Q-resafe restores quantized LLMs' safety performance to near-original levels (an average 42.6% improvement) and maintains robustness under strong adversarial evaluation. Our approach establishes a scalable co-optimization paradigm for quantization and safety, enabling trustworthy LLM deployment at the edge.
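The summary describes Q-resafe only at a high level. As a toy illustration (not the paper's actual algorithm), the sketch below quantizes a single weight matrix to 4 bits, which perturbs the layer's behavior, and then fits a low-rank patch on a small calibration set so the patched layer reproduces the aligned full-precision outputs. The 4-bit scheme, the rank-4 adapter, and the plain NumPy training loop are all illustrative assumptions standing in for "lightweight fine-tuning with safety-aligned optimization".

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_4bit(weights):
    """Symmetric round-to-nearest 4-bit quantization (calibration-free)."""
    scale = np.abs(weights).max() / 7.0
    return np.clip(np.round(weights / scale), -8, 7) * scale

d_in, d_out, n = 16, 8, 64
W = rng.normal(size=(d_out, d_in))      # stand-in for aligned full-precision weights
Wq = quantize_4bit(W)                   # quantized weights: behavior drifts from W

X = rng.normal(size=(n, d_in))          # stand-in "safety calibration" inputs
Y_safe = X @ W.T                        # outputs of the aligned full-precision layer

# Low-rank patch Wq + B @ A, fitted so the patched layer matches Y_safe,
# while the quantized base weights Wq stay frozen.
rank, lr = 4, 1e-2
A = rng.normal(size=(rank, d_in))
B = np.zeros((d_out, rank))
for _ in range(500):
    E = X @ (Wq + B @ A).T - Y_safe     # residual vs. aligned outputs
    G = E.T @ X / n                     # gradient w.r.t. the effective weights
    B -= lr * (G @ A.T)
    A -= lr * (B.T @ G)

err_before = np.mean((X @ Wq.T - Y_safe) ** 2)
err_after = np.mean((X @ (Wq + B @ A).T - Y_safe) ** 2)
```

Freezing the quantized base and training only the small adapter mirrors the "lightweight" aspect: the patch touches far fewer parameters than the full model, so restoring aligned behavior on the calibration set is cheap relative to re-running full safety alignment.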

📝 Abstract
Quantized large language models (LLMs) have gained increasing attention and significance for enabling deployment in resource-constrained environments. However, emerging studies on a few calibration dataset-free quantization methods suggest that quantization may compromise the safety capabilities of LLMs, underscoring the urgent need for systematic safety evaluations and effective mitigation strategies. In this paper, we present comprehensive safety evaluations across various mainstream quantization techniques and diverse calibration datasets, utilizing widely accepted safety benchmarks. To address the identified safety vulnerabilities, we propose a quantization-aware safety patching framework, Q-resafe, to efficiently restore the safety capabilities of quantized LLMs while minimizing any adverse impact on utility. Extensive experimental results demonstrate that Q-resafe successfully re-aligns the safety of quantized LLMs with their pre-quantization counterparts, even under challenging evaluation scenarios. Project page is available at: https://github.com/Thecommonirin/Qresafe.
Problem

Research questions and friction points this paper is trying to address.

Assessing safety risks in quantized large language models
Evaluating safety across quantization techniques and datasets
Developing safety patching for quantized LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive safety evaluations for quantized LLMs
Quantization-aware safety patching framework Q-resafe
Restores safety with minimal utility impact
Kejia Chen
Technical University of Munich
Manipulation of Deformable Objects · Multi-robot Collaboration · LLM-based Planning
Jiawen Zhang
The Hong Kong University of Science and Technology
Time Series · Knowledge Graph · AI · HCI
Jiacong Hu
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Yu Wang
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Jian Lou
Sun Yat-sen University
Zunlei Feng
The State Key Laboratory of Blockchain and Data Security, Zhejiang University
Mingli Song
The State Key Laboratory of Blockchain and Data Security, Zhejiang University