Stop Tracking Me! Proactive Defense Against Attribute Inference Attack in LLMs

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to inferring sensitive user attributes from textual inputs, a risk inadequately mitigated by existing anonymization techniques due to their coarse granularity. To counter such inference attacks, the authors propose TRACE-RPS, a unified defense framework that uniquely integrates attention mechanism–driven word-level privacy element identification with a lightweight two-stage optimization strategy. The TRACE component precisely tracks privacy-relevant cues in input text, while the RPS module steers the model to abstain from making sensitive inferences, thereby severing the attribute inference chain at its root. Experimental evaluations across multiple open-source LLMs demonstrate that TRACE-RPS reduces attribute inference accuracy from approximately 50% to below 5%, while maintaining strong cross-model generalizability, prompt robustness, and an effective utility–privacy trade-off.

📝 Abstract
Recent studies have shown that large language models (LLMs) can infer private user attributes (e.g., age, location, gender) from user-generated text shared online, enabling rapid and large-scale privacy breaches. Existing anonymization-based defenses are coarse-grained, lacking word-level precision in anonymizing privacy-leaking elements. Moreover, they are inherently limited: even after user text is altered to hide sensitive cues, attribute inference can still occur through models' reasoning capabilities. To address these limitations, we propose a unified defense framework that combines fine-grained anonymization (TRACE) with inference-preventing optimization (RPS). TRACE leverages attention mechanisms and inference chain generation to identify and anonymize privacy-leaking textual elements, while RPS employs a lightweight two-stage optimization strategy to induce model rejection behaviors, thereby preventing attribute inference. Evaluations across diverse LLMs show that TRACE-RPS reduces attribute inference accuracy from around 50% to below 5% on open-source models. In addition, our approach offers strong cross-model generalization, prompt-variation robustness, and a favorable utility–privacy trade-off. Our code is available at https://github.com/Jasper-Yan/TRACE-RPS.
Problem

Research questions and friction points this paper is trying to address.

attribute inference attack
privacy leakage
large language models
anonymization
user attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

attribute inference attack
fine-grained anonymization
inference-preventing optimization
large language models
privacy defense
Dong Yan
AI Chief Expert, Bosch.
Reinforcement Learning · Foundation Model
Jian Liang
Kuaishou Inc.
transfer learning · graph learning
Ran He
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences
Tieniu Tan
Institute of Automation, Chinese Academy of Sciences