🤖 AI Summary
Large language models (LLMs) are frequently evaluated on benchmarks contaminated by their training data (Benchmark Data Contamination, BDC), leading to inflated performance estimates and misleading generalization assessments. To address this, we propose DCR, a novel framework that quantifies contamination risk across four fine-grained dimensions: semantics, information, data, and labels. DCR employs a fuzzy inference system to integrate multi-level contamination scores, yielding interpretable, contamination-aware performance corrections. The method is lightweight, transparent, and broadly applicable, enabling automated, low-overhead contamination detection. Extensive experiments across nine mainstream LLMs and three core NLP tasks show that DCR-corrected accuracy deviates from clean (uncontaminated) baselines by only 4% on average, substantially recovering models’ true generalization capability. This improves the fairness, reliability, and interpretability of LLM evaluation.
📝 Abstract
The rapid advancement of large language models (LLMs) has heightened concerns about benchmark data contamination (BDC), where models inadvertently memorize evaluation data, inflating performance metrics and undermining genuine generalization assessment. This paper introduces the Data Contamination Risk (DCR) framework, a lightweight, interpretable pipeline designed to detect and quantify BDC across four granular levels: semantic, informational, data, and label. By synthesizing contamination scores via a fuzzy inference system, DCR produces a unified DCR Factor that adjusts raw accuracy to reflect contamination-aware performance. Validated on 9 LLMs (0.5B to 72B parameters) across sentiment analysis, fake news detection, and arithmetic reasoning tasks, the DCR framework reliably diagnoses contamination severity, and accuracy adjusted with the DCR Factor stays within 4% average error of the uncontaminated baseline across the three benchmarks. Emphasizing computational efficiency and transparency, DCR provides a practical tool for integrating contamination assessment into routine evaluations, fostering fairer comparisons and enhancing the credibility of LLM benchmarking practices.
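To make the aggregation step concrete, here is a minimal sketch of how four per-dimension contamination scores could be fused by a fuzzy inference system into a single DCR-style factor and then used to discount raw accuracy. The membership functions, linguistic labels, crisp rule outputs, and the multiplicative accuracy correction are all illustrative assumptions for a zero-order Sugeno system, not the paper's actual definitions.

```python
# Hypothetical sketch of DCR-style fuzzy aggregation (zero-order Sugeno).
# All membership functions, rule outputs, and the accuracy correction
# below are assumptions for illustration, not the paper's definitions.

def tri(x, a, b, c):
    """Triangular membership function supported on (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(score):
    """Map a contamination score in [0, 1] to linguistic memberships."""
    return {
        "low": tri(score, -0.5, 0.0, 0.5),
        "medium": tri(score, 0.0, 0.5, 1.0),
        "high": tri(score, 0.5, 1.0, 1.5),
    }

def dcr_factor(semantic, informational, data, label):
    """Fuse the four per-level contamination scores into one risk factor.

    Zero-order Sugeno inference: each linguistic label maps to a crisp
    output, and the result is the membership-weighted average.
    """
    crisp = {"low": 0.1, "medium": 0.5, "high": 0.9}
    num = den = 0.0
    for score in (semantic, informational, data, label):
        for name, mu in fuzzify(score).items():
            num += mu * crisp[name]
            den += mu
    return num / den if den else 0.0

def adjusted_accuracy(raw_acc, factor):
    """Discount raw benchmark accuracy by the contamination risk factor."""
    return raw_acc * (1.0 - factor)
```

With these toy settings, four fully clean scores (all 0.0) yield a factor of 0.1 and four fully contaminated scores (all 1.0) yield 0.9, so the adjustment shrinks reported accuracy monotonically with estimated contamination.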