Detecting RLVR Training Data via Structural Convergence of Reasoning

📅 2026-02-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of benchmark contamination in reinforcement learning with verifiable rewards (RLVR), where existing likelihood-based detection methods are limited because the training data is undisclosed. The study is the first to reveal that RLVR training induces a collapse in generation diversity, manifesting as structural convergence across outputs. To exploit this behavior, the authors propose a black-box detection method that requires neither reference models nor token-level probabilities. By sampling multiple generations from the same prompt and measuring their structural convergence via a Min-$k$NN edit distance metric, the approach effectively identifies data seen during RLVR training. Experiments demonstrate that this method significantly outperforms current membership inference and contamination detection baselines across multiple RLVR-trained reasoning models, offering an efficient and practical solution for tracing training data provenance.

๐Ÿ“ Abstract
Reinforcement learning with verifiable rewards (RLVR) is central to training modern reasoning models, but the undisclosed training data raises concerns about benchmark contamination. Unlike pretraining methods, which optimize models using token-level probabilities, RLVR fine-tunes models based on reward feedback from self-generated reasoning trajectories, making conventional likelihood-based detection methods less effective. We show that RLVR induces a distinctive behavioral signature: prompts encountered during RLVR training result in more rigid and similar generations, while unseen prompts retain greater diversity. We introduce Min-$k$NN Distance, a simple black-box detector that quantifies this collapse by sampling multiple completions for a given prompt and computing the average of the $k$ smallest nearest-neighbor edit distances. Min-$k$NN Distance requires no access to a reference model or token probabilities. Experiments across multiple RLVR-trained reasoning models show that Min-$k$NN Distance reliably distinguishes RL-seen examples from unseen ones and outperforms existing membership inference and RL contamination detection baselines.
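The detection statistic described above can be sketched in a few lines: for each sampled completion, find its edit distance to its nearest neighbor among the other samples, then average the $k$ smallest of those distances. A low score indicates structural convergence (a likely RL-seen prompt). This is a minimal illustration, not the authors' implementation; the function names are invented here, and the paper may tokenize or normalize distances differently than raw character-level Levenshtein distance:

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via the standard two-row DP."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def min_knn_distance(completions: list[str], k: int) -> float:
    """Average of the k smallest nearest-neighbor edit distances
    among completions sampled from the same prompt (hypothetical
    rendering of the paper's Min-kNN Distance score)."""
    nn = []
    for i, ci in enumerate(completions):
        # distance from completion i to its closest other completion
        nn.append(min(levenshtein(ci, cj)
                      for j, cj in enumerate(completions) if j != i))
    return sum(sorted(nn)[:k]) / k
```

In use, one would sample, say, 16 completions at a fixed temperature and compare `min_knn_distance` against a threshold: scores near zero suggest the collapsed, near-identical generations characteristic of RL-seen prompts.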
Problem

Research questions and friction points this paper is trying to address.

RLVR
training data detection
benchmark contamination
reasoning models
membership inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLVR
Min-kNN Distance
reasoning trajectory
black-box detection
structural convergence