STaR: Sensitive Trajectory Regulation for Unlearning in Large Reasoning Models

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical privacy risk in large reasoning models, which often inadvertently leak sensitive information through the intermediate steps of complex chain-of-thought generation. Existing unlearning methods, limited to sanitizing only the final output, fail to protect privacy throughout the entire reasoning trajectory. To bridge this gap, the authors propose the first inference-time unlearning framework that operates without modifying model parameters. The approach dynamically blocks privacy leakage across the full reasoning chain by combining semantic-aware detection, secure prompt prefix injection, trajectory-aware suppression, and token-level adaptive filtering. The paper also introduces new evaluation metrics, including multi-decoding consistency assessment and multi-granularity membership inference attacks, and demonstrates on the R-TOFU benchmark that the method significantly reduces privacy exposure while incurring minimal degradation in reasoning utility.
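The multi-granularity membership inference evaluation described above can be illustrated with a toy sketch. Everything below is an assumption for illustration: the paper does not publish this code, and the loss-threshold decision rule, the function names (`infer_membership`, `mia_score`), and the threshold value are all hypothetical stand-ins for whatever attack the authors actually use.

```python
# Hypothetical sketch of a multi-granularity membership inference check.
# Idea: a memorized (member) sample tends to have lower per-token loss, so we
# test leakage separately at the final-answer level and at every
# reasoning-chain step, as the paper's metric is described as doing.

def mia_score(losses: list[float]) -> float:
    """Mean loss over a span; lower suggests memorization (illustrative)."""
    return sum(losses) / len(losses)

def infer_membership(answer_loss: float,
                     cot_step_losses: list[float],
                     threshold: float = 1.0) -> dict[str, bool]:
    """Flag leakage if the answer, or ANY intermediate reasoning step,
    scores below the loss threshold. Threshold is an assumed constant."""
    return {
        "answer": answer_loss < threshold,
        "reasoning_chain": any(l < threshold for l in cot_step_losses),
    }
```

The point of the two-level check is the paper's core observation: an unlearned model can look clean at the answer level while a low-loss intermediate step still betrays membership.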

📝 Abstract
Large Reasoning Models (LRMs) have advanced automated multi-step reasoning, but their ability to generate complex Chain-of-Thought (CoT) trajectories introduces severe privacy risks, as sensitive information may be deeply embedded throughout the reasoning process. Existing Large Language Model (LLM) unlearning approaches, which typically modify only final answers, are insufficient for LRMs: they fail to remove sensitive content from intermediate steps, leading to persistent privacy leakage and degraded security. To address these challenges, we propose Sensitive Trajectory Regulation (STaR), a parameter-free, inference-time unlearning framework that achieves robust privacy protection throughout the reasoning process. Specifically, we first identify sensitive content via semantic-aware detection. Then, we inject global safety constraints through a secure prompt prefix. Next, we perform trajectory-aware suppression to dynamically block sensitive content across the entire reasoning chain. Finally, we apply token-level adaptive filtering to block both exact and paraphrased sensitive tokens during generation. Furthermore, to overcome the inadequacies of existing evaluation protocols, we introduce two metrics: Multi-Decoding Consistency Assessment (MCS), which measures the consistency of unlearning across diverse decoding strategies, and Multi-Granularity Membership Inference Attack (MIA) Evaluation, which quantifies privacy protection at both the answer and reasoning-chain levels. Experiments on the R-TOFU benchmark demonstrate that STaR achieves comprehensive and stable unlearning with minimal utility loss, setting a new standard for privacy-preserving reasoning in LRMs.
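The four-stage pipeline in the abstract (detection, prefix injection, trajectory suppression, token filtering) can be sketched in miniature. This is a minimal toy, not STaR itself: the string-matching "detector", the `[REDACTED]` redaction, the safety-prefix wording, and all function names are assumptions standing in for the paper's semantic-aware and adaptive components.

```python
# Toy sketch of the four inference-time stages described in the abstract.
# Real STaR uses semantic-aware detection and adaptive token filtering; here
# both are approximated with simple case-insensitive string matching.
import re

SENSITIVE = {"alice", "42 main st"}  # hypothetical forget-set entities

def detect_sensitive(text: str) -> list[str]:
    """Stage 1: detect sensitive content (toy: substring match)."""
    return [s for s in SENSITIVE if s in text.lower()]

def inject_prefix(prompt: str) -> str:
    """Stage 2: prepend a global safety constraint as a secure prompt prefix."""
    return "Do not reveal personal details about protected entities.\n" + prompt

def suppress_trajectory(cot_steps: list[str]) -> list[str]:
    """Stage 3: trajectory-aware suppression, redacting every intermediate
    reasoning step rather than only the final answer."""
    out = []
    for step in cot_steps:
        for s in detect_sensitive(step):
            step = re.sub(re.escape(s), "[REDACTED]", step, flags=re.IGNORECASE)
        out.append(step)
    return out

def filter_token(candidate: str, blocked: set[str]) -> bool:
    """Stage 4: token-level filtering, vetoing a candidate token whose
    normalized form is in the blocked vocabulary (paraphrase handling would
    require semantic matching, omitted in this toy)."""
    return candidate.lower().strip() in blocked
```

The key design point carried over from the abstract is that stage 3 operates on every CoT step, which is what distinguishes trajectory-level unlearning from answer-only sanitization.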
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
Chain-of-Thought
unlearning
privacy leakage
sensitive information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sensitive Trajectory Regulation
Chain-of-Thought Unlearning
Inference-Time Privacy
Trajectory-Aware Suppression
Multi-Granularity MIA