When Reasoning Leaks Membership: Membership Inference Attack on Black-box Large Reasoning Models

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a novel privacy threat in black-box large reasoning models (LRMs): the intermediate reasoning traces exposed via APIs can leak membership information about training data. To demonstrate this, we propose BlackSpectrum, the first membership inference attack framework tailored to black-box LRMs. By modeling distributional discrepancies of reasoning traces in a semantic latent space, BlackSpectrum constructs a "recall–inference" axis to distinguish whether a given sample belongs to the model's training set. We introduce two new datasets, arXivReasoning and BookReasoning, to support this investigation, and design an attack mechanism based on latent-space analysis and membership scoring. Experimental results demonstrate that exposure of reasoning traces significantly increases attack success rates, revealing a critical trade-off between model transparency and privacy preservation.

📝 Abstract
Large Reasoning Models (LRMs) have rapidly gained prominence for their strong performance on complex tasks. Many modern black-box LRMs expose their intermediate reasoning traces through APIs to improve transparency (e.g., Gemini-2.5 and Claude-sonnet). Despite their benefits, we find that these traces can leak membership signals, creating a new privacy threat even without access to the token logits used in prior attacks. In this work, we initiate the first systematic exploration of Membership Inference Attacks (MIAs) on black-box LRMs. Our preliminary analysis shows that LRMs produce confident, recall-like reasoning traces on familiar training member samples but more hesitant, inference-like reasoning traces on non-members. The representations of these traces are continuously distributed in the semantic latent space, spanning from familiar to unfamiliar samples. Building on this observation, we propose BlackSpectrum, the first membership inference attack framework targeting black-box LRMs. The key idea is to construct a recall–inference axis in the semantic latent space, based on representations derived from the exposed traces. By locating where a query sample falls along this axis, the attacker can obtain a membership score and predict how likely the sample is to be a member of the training data. Additionally, to address the limitations of outdated datasets unsuited to modern LRMs, we provide two new datasets to support future research: arXivReasoning and BookReasoning. Empirically, exposing reasoning traces significantly increases the vulnerability of LRMs to membership inference attacks, leading to large gains in attack performance. Our findings highlight the need for LRM companies to balance transparency of intermediate reasoning traces with privacy preservation.
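The recall–inference axis described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact BlackSpectrum method: it assumes the attacker already has fixed-length embeddings of reasoning traces (toy random vectors here) for reference member and non-member samples, takes the axis to be the direction from the non-member centroid to the member centroid, and scores a query trace by its projection onto that axis.

```python
import numpy as np

def recall_inference_axis(member_embs, nonmember_embs):
    """Unit vector pointing from the non-member centroid toward the member centroid."""
    axis = member_embs.mean(axis=0) - nonmember_embs.mean(axis=0)
    return axis / np.linalg.norm(axis)

def membership_score(query_emb, member_embs, nonmember_embs):
    """Project the query trace embedding onto the axis; higher = more member-like."""
    axis = recall_inference_axis(member_embs, nonmember_embs)
    origin = nonmember_embs.mean(axis=0)
    return float(np.dot(query_emb - origin, axis))

# Toy data standing in for trace embeddings (hypothetical, for illustration only):
# members cluster around +1 per dimension (confident, recall-like traces),
# non-members around -1 (hesitant, inference-like traces).
rng = np.random.default_rng(0)
dim = 16
members = rng.normal(loc=1.0, size=(50, dim))
nonmembers = rng.normal(loc=-1.0, size=(50, dim))

member_query = rng.normal(loc=1.0, size=dim)
nonmember_query = rng.normal(loc=-1.0, size=dim)
print(membership_score(member_query, members, nonmembers) >
      membership_score(nonmember_query, members, nonmembers))  # prints True
```

In practice the embeddings would come from a sentence encoder applied to the API-exposed traces, and the score would be thresholded (or calibrated) to produce a membership prediction.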
Problem

Research questions and friction points this paper is trying to address.

Membership Inference Attack
Large Reasoning Models
Reasoning Traces
Privacy Leakage
Black-box Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Membership Inference Attack
Large Reasoning Models
Reasoning Traces
Black-box Attack
Semantic Latent Space
Ruihan Hu
Beijing University of Posts and Telecommunications
Yu-Ming Shang
Beijing University of Posts and Telecommunications
Natural Language Processing · Information Extraction
Wei Luo
Beijing University of Posts and Telecommunications
Ye Tao
China Unicom Research Institute
Xi Zhang
Professor, Beijing University of Posts and Telecommunications
Data Mining · Computer Architecture · Trustworthy AI