How Quantization Impacts Privacy Risk on LLMs for Code?

📅 2025-07-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the impact of model quantization on privacy risk in large language models for code (LLMs4Code). The authors systematically apply static and dynamic quantization across three model families (Pythia, CodeGen, and GPT-Neo) and evaluate privacy leakage via multiple membership inference attacks in cross-architecture and cross-scale settings. The empirical analysis shows, for the first time, that quantization substantially reduces membership inference success rates relative to full-precision models, while task performance and privacy risk remain positively correlated, indicating an underlying trade-off. Notably, quantizing a larger model can strike a better performance-privacy balance than deploying a full-precision smaller model. These findings position quantization as a practical privacy-enhancing step and offer guidance for the secure, efficient deployment of compressed LLMs4Code.
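For context, the basic mechanism of weight quantization can be illustrated with a minimal, framework-free sketch of symmetric 8-bit post-training quantization. The helper names and weight values below are illustrative only; real LLM quantization relies on framework APIs (e.g., PyTorch's quantization tooling) with per-channel scales and calibration, none of which is shown here.

```python
# Toy illustration of symmetric 8-bit quantization of a weight tensor
# (here, a flat list of floats). Hypothetical helpers, not the paper's
# pipeline: the point is the round-trip and its bounded error.

def quantize_int8(weights):
    """Map floats to int8 codes using a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

weights = [0.50, -1.20, 0.03, 0.74]
codes, scale = quantize_int8(weights)
recovered = dequantize(codes, scale)
# Rounding guarantees each recovered weight is within scale/2 of the original.
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The lossy rounding step is what discards fine-grained information about individual weights; the paper's question is whether that loss also discards the memorized signal that membership inference attacks exploit.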

📝 Abstract
Large language models for code (LLMs4Code) rely heavily on massive training data, including sensitive data such as projects' cloud service credentials and developers' personally identifiable information, raising serious privacy concerns. Membership inference (MI) has recently emerged as an effective tool for assessing privacy risk by identifying whether specific data belong to a model's training set. In parallel, model compression techniques, especially quantization, have gained traction for reducing computational costs and enabling the deployment of large models. However, while quantized models still retain knowledge learned from the original training data, it remains unclear whether quantization affects their ability to retain and expose private information. Answering this question is of great importance to understanding privacy risks in real-world deployments. In this work, we conduct the first empirical study on how quantization influences task performance and privacy risk simultaneously in LLMs4Code. To do this, we apply widely used quantization techniques (static and dynamic) to three representative model families, namely Pythia, CodeGen, and GPT-Neo. Our results demonstrate that quantization has a significant impact on reducing privacy risk relative to the original model. We also uncover a positive correlation between task performance and privacy risk, indicating an underlying trade-off. Moreover, we reveal the possibility that quantizing larger models could yield a better balance than using full-precision small models. Finally, we demonstrate that these findings generalize across different architectures, model sizes, and MI methods, offering practical guidance for safeguarding privacy when deploying compressed LLMs4Code.
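The membership inference described above can be sketched, in its simplest form, as a loss-threshold attack: samples on which the model achieves unusually low loss are guessed to be training members. This is a common MI baseline, not necessarily the exact attack used in the paper, and the losses and threshold below are made-up numbers for illustration.

```python
# Minimal loss-threshold membership inference (a standard MI baseline):
# predict "member" when a sample's loss falls below a chosen threshold.

def mi_attack_accuracy(member_losses, nonmember_losses, threshold):
    """Fraction of samples correctly classified as member/non-member
    by predicting 'member' when loss < threshold."""
    correct = sum(l < threshold for l in member_losses)
    correct += sum(l >= threshold for l in nonmember_losses)
    return correct / (len(member_losses) + len(nonmember_losses))

# Illustrative per-sample losses: members tend to have lower loss
# because the model partially memorized them during training.
members = [0.8, 1.1, 0.9, 1.3]
nonmembers = [2.0, 1.7, 2.4, 1.2]
acc = mi_attack_accuracy(members, nonmembers, threshold=1.5)
# acc == 0.875 here: the loss gap makes members mostly separable.
```

Under this framing, the paper's finding is that quantization narrows the loss gap an attacker can exploit, pushing such attack accuracies closer to chance without a comparable drop in task performance.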
Problem

Research questions and friction points this paper is trying to address.

How does quantization affect privacy risk in LLMs for code?
How does quantization impact task performance and privacy risk simultaneously?
What trade-off exists between task performance and privacy risk under quantization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantization reduces privacy risk in LLMs4Code
Task performance and privacy risk correlate positively
Quantized large models balance performance and privacy
Md Nazmul Haque
North Carolina State University
Hua Yang
Redrock Biometrics
Biometrics · Motion Tracking · Computer Vision · Augmented Reality · Image Processing
Zhou Yang
University of Alberta
Bowen Xu
North Carolina State University