DVD: A Robust Method for Detecting Variant Contamination in Large Language Model Evaluation

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the pervasive issue of “variant contamination” in large language model (LLM) evaluation, where training data contain samples that are semantically equivalent to, but syntactically distinct from, test items, leading models to rely on memorization rather than genuine reasoning and thereby inflating performance scores. The paper formally characterizes this problem for the first time and introduces the first benchmark dataset designed to assess variant contamination. It further proposes DVD (Distributional Variance Detection), a novel method that leverages local variance in the generation distribution under temperature sampling to detect contamination at the individual sample level, identifying anomalous shifts between memory reliance and perturbation-induced drift. Experiments demonstrate that DVD significantly outperforms baseline approaches, including perplexity, Min-k%++, edit distance (CDD), and embedding similarity, on Omni-MATH and SuperGPQA, while exhibiting strong robustness to hyperparameter choices.

📝 Abstract
Evaluating large language models (LLMs) is increasingly confounded by \emph{variant contamination}: the training corpus contains semantically equivalent yet lexically or syntactically altered versions of test items. Unlike verbatim leakage, these paraphrased or structurally transformed variants evade existing detectors based on sampling consistency or perplexity, thereby inflating benchmark scores via memorization rather than genuine reasoning. We formalize this problem and introduce \textbf{DVD} (\textbf{D}etection via \textbf{V}ariance of generation \textbf{D}istribution), a single-sample detector that models the local output distribution induced by temperature sampling. Our key insight is that contaminated items trigger alternation between a \emph{memory-adherence} state and a \emph{perturbation-drift} state, yielding abnormally high variance in the synthetic difficulty of low-probability tokens; uncontaminated items remain in drift with comparatively smooth variance. We construct the first benchmark for variant contamination across two domains, Omni-MATH and SuperGPQA, by generating and filtering semantically equivalent variants, and simulate contamination via fine-tuning models of different scales and architectures (Qwen2.5 and Llama3.1). Across datasets and models, \textbf{DVD} consistently outperforms perplexity-based, Min-$k$\%++, edit-distance (CDD), and embedding-similarity baselines, while exhibiting strong robustness to hyperparameters. Our results establish variance of the generation distribution as a principled and practical fingerprint for detecting variant contamination in LLM evaluation.
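The listing includes no code, but the variance-of-generation-distribution idea described in the abstract can be sketched roughly as follows. All function names, the difficulty cutoff `tau`, and the toy log-probabilities are illustrative assumptions; the paper's actual estimator for the "synthetic difficulty of low-probability tokens" may differ:

```python
from statistics import pvariance

def low_prob_difficulty(token_logprobs, tau=-2.0):
    """Mean 'difficulty' (negative log-probability) of the low-probability
    tokens in one sampled generation. tau is an illustrative cutoff, not a
    value taken from the paper."""
    low = [-lp for lp in token_logprobs if lp < tau]
    return sum(low) / len(low) if low else 0.0

def dvd_score(sampled_logprobs):
    """Variance of per-sample difficulty across several temperature samples.
    Per the abstract's insight: contaminated items alternate between a
    memory-adherence state (few hard tokens) and a perturbation-drift state
    (many hard tokens), so this variance is abnormally high, while clean
    items drift smoothly with low variance."""
    difficulties = [low_prob_difficulty(s) for s in sampled_logprobs]
    return pvariance(difficulties)

# Toy token-level log-probabilities standing in for real model output:
contaminated = [[-0.1] * 20] * 4 + [[-4.0] * 20] * 4  # alternating states
clean = [[-3.0] * 20] * 8                             # smooth drift
assert dvd_score(contaminated) > dvd_score(clean)
```

In practice the per-token log-probabilities would come from multiple temperature-sampled generations of the same evaluation item; the sketch only shows how a variance statistic over a per-sample difficulty summary separates the two regimes.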
Problem

Research questions and friction points this paper is trying to address.

variant contamination
large language model evaluation
paraphrased leakage
benchmark integrity
training-test overlap
Innovation

Methods, ideas, or system contributions that make the work stand out.

variant contamination
generation distribution variance
DVD
LLM evaluation
memory-adherence
🔎 Similar Papers
No similar papers found.
Renzhao Liang
Beihang University
Jingru Chen
Peking University
Bo Jia
Beijing University of Posts and Telecommunications
Bo Deng
Beihang University
Chenggang Xie
Beihang University
Yidong Wang
Peking University
Ke Jin
Professor at Beijing Institute of Technology
Radiation damage, Ion Beam Analysis, high entropy alloys, Nuclear Material
Linfeng Zhang
DP Technology; AI for Science Institute
AI for Science, multi-scale modeling, molecular simulation, drug/materials design
Cunxiang Wang
Tsinghua University; ZhipuAI
Large Language Models, LLM Evaluation, LLM Post-training