DCR: Quantifying Data Contamination in LLMs Evaluation

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) are frequently evaluated on benchmarks contaminated by training data (Benchmark Data Contamination, BDC), leading to inflated performance estimates and misleading generalization assessments. To address this, we propose DCR—a novel framework that quantifies contamination risk across four fine-grained dimensions: semantics, information, data, and labels. DCR employs a fuzzy inference system to integrate multi-level contamination scores, yielding interpretable, contamination-aware performance corrections. The method is lightweight, transparent, and broadly applicable, enabling automated, low-overhead contamination detection. Extensive experiments across nine mainstream LLMs and three core NLP tasks demonstrate that DCR-corrected accuracy deviates from clean (uncontaminated) baselines by only 4% on average—significantly recovering models’ true generalization capability. This substantially enhances the fairness, reliability, and interpretability of LLM evaluation.

📝 Abstract
The rapid advancement of large language models (LLMs) has heightened concerns about benchmark data contamination (BDC), where models inadvertently memorize evaluation data, inflating performance metrics and undermining genuine generalization assessment. This paper introduces the Data Contamination Risk (DCR) framework, a lightweight, interpretable pipeline designed to detect and quantify BDC across four granular levels: semantic, informational, data, and label. By synthesizing contamination scores via a fuzzy inference system, DCR produces a unified DCR Factor that adjusts raw accuracy to reflect contamination-aware performance. Validated on 9 LLMs (0.5B-72B) across sentiment analysis, fake news detection, and arithmetic reasoning tasks, the DCR framework reliably diagnoses contamination severity and, with accuracy adjusted using the DCR Factor, stays within 4% average error of the uncontaminated baseline across the three benchmarks. Emphasizing computational efficiency and transparency, DCR provides a practical tool for integrating contamination assessment into routine evaluations, fostering fairer comparisons and enhancing the credibility of LLM benchmarking practices.
Problem

Research questions and friction points this paper is trying to address.

Detects and quantifies benchmark data contamination in LLMs
Adjusts performance metrics to reflect contamination impact
Validates contamination severity across multiple tasks and models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight interpretable pipeline for BDC detection
Fuzzy inference system synthesizing contamination scores
DCR Factor adjusts accuracy for contamination awareness
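The pipeline above can be sketched in code. Note this is an illustrative toy, not the authors' implementation: the paper does not spell out its membership functions, rule base, or adjustment formula here, so the triangular memberships, equal-weight aggregation, risk levels, and multiplicative accuracy discount below are all assumptions.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership of x over the support (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dcr_factor(scores):
    """Fuse four per-dimension contamination scores in [0, 1]
    (semantic, informational, data, label) into a single risk factor
    via a toy Mamdani-style fuzzy inference step (assumed design)."""
    agg = sum(scores) / len(scores)  # assumption: equal-weight aggregation
    low = tri(agg, -0.5, 0.0, 0.5)   # fuzzy sets over the aggregate score
    med = tri(agg, 0.0, 0.5, 1.0)
    high = tri(agg, 0.5, 1.0, 1.5)
    # Defuzzify: centroid of rule outputs at assumed risk levels 0.1/0.5/0.9.
    num = 0.1 * low + 0.5 * med + 0.9 * high
    den = low + med + high
    return num / den if den else 0.0

def adjusted_accuracy(raw_acc, factor):
    """Discount raw accuracy by the contamination risk factor (assumed rule)."""
    return raw_acc * (1.0 - factor)
```

For example, `adjusted_accuracy(0.92, dcr_factor([0.8, 0.7, 0.9, 0.8]))` would down-weight a benchmark score that the four contamination probes flag as high-risk, while near-zero scores leave accuracy essentially unchanged.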
Cheng Xu
University College Dublin
Nan Yan
Georgia Institute of Technology
Shuhao Guan
University College Dublin
Changhong Jin
University College Dublin
Yuke Mei
Bebxy
Yibing Guo
University College Dublin
M-Tahar Kechadi
University College Dublin