AI Summary
Large language models (LLMs) often inadvertently leak sensitive information within chain-of-thought (CoT) reasoning traces, violating contextual privacy expectations; existing defenses primarily target output-level leakage. Method: We propose a lightweight, fine-tuning-free test-time intervention that identifies privacy-critical hidden layers via layer-wise sensitivity analysis and dynamically injects targeted guidance vectors to suppress privacy leakage in reasoning traces. We introduce the Chain-of-Thought Privacy Leakage (CPL) metric for quantitative assessment. Contribution/Results: Evaluated on the AirGapAgent-R benchmark across QwQ-32B, Llama-3.1-8B, and Deepseek models, our method reduces CPL by 18.2%, 17.9%, and 31.2%, respectively, with no task performance degradation. To our knowledge, this is the first approach enabling fine-grained, real-time, non-intrusive privacy protection over intermediate reasoning states, without modifying model parameters or inference architecture.
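To make the mechanism concrete, the following is a minimal sketch of test-time activation steering via a forward hook, in the spirit of the intervention described above. It is not the SALT implementation: the layer index, steering strength, and random stand-in steering vector are illustrative placeholders; in the actual method, the high-leakage layer and the guidance vector would come from the layer-wise sensitivity analysis (e.g., contrasting activations on leaky vs. non-leaky reasoning traces).

```python
# Minimal sketch of test-time activation steering (not the authors' code).
# Assumes a HuggingFace-style causal LM; layer_idx, alpha, and the steering
# vector below are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # any decoder-only LM with .model.layers
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

layer_idx = 18   # hypothetical "high-leakage" layer found by sensitivity analysis
alpha = 4.0      # steering strength, tuned on held-out data in practice
hidden = model.config.hidden_size
# Stand-in steering direction; the real vector would be derived from activations
# on leaky vs. non-leaky reasoning traces rather than sampled at random.
steer = torch.randn(hidden, dtype=model.dtype)
steer = steer / steer.norm()

def inject_steering(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden state
    # of shape (batch, seq_len, hidden); add the steering direction everywhere.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + alpha * steer.to(hs.device)
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[layer_idx].register_forward_hook(inject_steering)

prompt = "Reason step by step about the user's request without revealing personal details."
ids = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unmodified model after generation
```

Because the hook only perturbs hidden states at inference time, the base model's parameters and serving stack stay untouched, which is what makes this style of intervention fine-tuning-free and easy to toggle per request.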
Abstract
As Large Language Models (LLMs) evolve into personal assistants with access to sensitive user data, they face a critical privacy challenge: while prior work has addressed output-level privacy, recent findings reveal that LLMs often leak private information through their internal reasoning processes, violating contextual privacy expectations. These leaky thoughts occur when models inadvertently expose sensitive details in their reasoning traces, even when final outputs appear safe. The challenge lies in preventing such leakage without compromising the model's reasoning capabilities, requiring a delicate balance between privacy and utility. We introduce Steering Activations towards Leakage-free Thinking (SALT), a lightweight test-time intervention that mitigates privacy leakage in a model's Chain of Thought (CoT) by injecting targeted steering vectors into its hidden states at the high-leakage layers we identify as responsible for this behavior. Through experiments across multiple LLMs, we demonstrate that SALT reduces CPL by 18.2% on QwQ-32B, 17.9% on Llama-3.1-8B, and 31.2% on Deepseek on the contextual privacy leakage dataset AirGapAgent-R, while maintaining comparable task performance and utility. Our work establishes SALT as a practical approach for test-time privacy protection in reasoning-capable language models, offering a path toward safer deployment of LLM-based personal agents.