🤖 AI Summary
This work addresses benchmark contamination in reinforcement learning with verifiable rewards (RLVR), a setting where existing likelihood-based detection methods fall short because the training data is undisclosed. The study is the first to show that RLVR training induces a collapse in generation diversity, manifesting as structural convergence across outputs sampled from the same prompt. Exploiting this behavior, the authors propose a black-box detection method that requires neither a reference model nor token-level probabilities: sample multiple generations for a given prompt and measure their structural convergence with a Min-$k$NN edit-distance metric. Experiments show that this method substantially outperforms existing membership inference and contamination detection baselines across multiple RLVR-trained reasoning models, offering an efficient, practical tool for tracing training data provenance.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) is central to training modern reasoning models, but the undisclosed training data raises concerns about benchmark contamination. Unlike pretraining, which optimizes models via token-level probabilities, RLVR fine-tunes models with reward feedback on self-generated reasoning trajectories, making conventional likelihood-based detection methods less effective. We show that RLVR induces a distinctive behavioral signature: prompts encountered during RLVR training yield more rigid, mutually similar generations, while unseen prompts retain greater diversity. We introduce Min-$k$NN Distance, a simple black-box detector that quantifies this collapse by sampling multiple completions for a given prompt and averaging the $k$ smallest nearest-neighbor edit distances. Min-$k$NN Distance requires no access to a reference model or token probabilities. Experiments across multiple RLVR-trained reasoning models show that Min-$k$NN Distance reliably distinguishes RL-seen examples from unseen ones and outperforms existing membership inference and RL contamination detection baselines.
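The abstract's description of the detector can be sketched in a few lines of Python. This is a minimal illustration, not the paper's reference implementation: it assumes character-level Levenshtein distance between completions, and that "the average of the $k$ smallest nearest-neighbor edit distances" means computing each completion's distance to its nearest neighbor among the other samples, then averaging the $k$ smallest of those values (normalization and tokenization choices in the paper may differ).

```python
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard two-row DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + (ca != cb))  # substitution
        prev = cur
    return prev[len(b)]

def min_knn_distance(completions: list[str], k: int = 2) -> float:
    """Average of the k smallest nearest-neighbor edit distances
    among completions sampled for one prompt (assumed definition)."""
    n = len(completions)
    dist = [[0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        d = edit_distance(completions[i], completions[j])
        dist[i][j] = dist[j][i] = d
    # Each sample's distance to its closest other sample.
    nn = sorted(min(dist[i][j] for j in range(n) if j != i)
                for i in range(n))
    return sum(nn[:k]) / k
```

Under this reading, a low score signals the diversity collapse described above (completions nearly duplicate one another, suggesting the prompt was seen during RLVR training), while a high score indicates the diversity expected of unseen prompts.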