🤖 AI Summary
This work addresses the pervasive issue of “variant contamination” in large language model (LLM) evaluation, where the training data contain samples that are semantically equivalent to but syntactically distinct from test items, allowing models to rely on memorization rather than genuine reasoning and thereby inflating performance scores. We formally characterize this problem for the first time and introduce the first benchmark dataset designed to assess variant contamination. Furthermore, we propose DVD (Detection via Variance of generation Distribution), a novel method that leverages local variance in the generation distribution under temperature sampling to detect contamination at the individual-sample level, identifying anomalous alternation between memory adherence and perturbation-induced drift. Experiments demonstrate that DVD significantly outperforms baselines such as perplexity, Min-k%++, edit distance (CDD), and embedding similarity on Omni-MATH and SuperGPQA, while exhibiting strong robustness to hyperparameter choices.
📝 Abstract
Evaluating large language models (LLMs) is increasingly confounded by \emph{variant contamination}: the training corpus contains semantically equivalent yet lexically or syntactically altered versions of test items. Unlike verbatim leakage, these paraphrased or structurally transformed variants evade existing detectors based on sampling consistency or perplexity, thereby inflating benchmark scores via memorization rather than genuine reasoning. We formalize this problem and introduce \textbf{DVD} (\textbf{D}etection via \textbf{V}ariance of generation \textbf{D}istribution), a single-sample detector that models the local output distribution induced by temperature sampling. Our key insight is that contaminated items trigger alternation between a \emph{memory-adherence} state and a \emph{perturbation-drift} state, yielding abnormally high variance in the synthetic difficulty of low-probability tokens, whereas uncontaminated items remain in the drift state with comparatively smooth variance. We construct the first benchmark for variant contamination across two domains, Omni-MATH and SuperGPQA, by generating and filtering semantically equivalent variants, and we simulate contamination by fine-tuning models of different scales and architectures (Qwen2.5 and Llama3.1). Across datasets and models, \textbf{DVD} consistently outperforms perplexity-based, Min-$k$\%++, edit-distance (CDD), and embedding-similarity baselines, while exhibiting strong robustness to hyperparameters. Our results establish the variance of the generation distribution as a principled and practical fingerprint for detecting variant contamination in LLM evaluation.
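To make the variance intuition concrete, below is a minimal sketch, not the paper's implementation: it assumes access to per-token probabilities from several temperature-sampled generations of the same item, defines a hypothetical per-sample "difficulty" over low-probability tokens, and scores the item by the variance of that difficulty across samples. The function names and the threshold `tau` are illustrative assumptions, not quantities defined in the abstract.

```python
import numpy as np

def sample_difficulty(token_probs, tau=0.1):
    """Illustrative 'difficulty' of one sampled generation: the mean negative
    log-probability over its low-probability tokens (p < tau).
    Returns 0.0 if no token falls below the threshold."""
    token_probs = np.asarray(token_probs, dtype=float)
    low = token_probs[token_probs < tau]
    if low.size == 0:
        return 0.0
    return float(np.mean(-np.log(low)))

def variance_score(generations_probs, tau=0.1):
    """Score a single test item by the variance of the per-sample difficulty
    across its temperature-sampled generations. In the spirit of the abstract,
    a large variance would suggest alternation between memory adherence
    (few hard tokens) and perturbation drift (many hard tokens)."""
    difficulties = [sample_difficulty(p, tau) for p in generations_probs]
    return float(np.var(difficulties))

if __name__ == "__main__":
    # Toy illustration with hand-made per-token probabilities for several
    # sampled continuations of one item (in practice, taken from model logits).
    suspect_item = [
        [0.95, 0.92, 0.90, 0.97],        # adherence-like: confident everywhere
        [0.40, 0.05, 0.30, 0.02, 0.45],  # drift-like: several hard tokens
        [0.93, 0.96, 0.91],              # adherence-like again
    ]
    clean_item = [
        [0.50, 0.08, 0.35, 0.06],
        [0.45, 0.07, 0.40, 0.05, 0.30],
        [0.55, 0.09, 0.33, 0.07],
    ]
    print("suspect item score:", variance_score(suspect_item))
    print("clean item score:  ", variance_score(clean_item))
```

On this toy input the suspect item yields a much larger variance than the clean one, mirroring the claimed contrast between abnormally high variance under contamination and comparatively smooth variance otherwise; the paper's actual detector and thresholds may differ.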