PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models

📅 2024-06-26
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from benchmark contamination—where training data overlaps with evaluation benchmarks (e.g., MMLU, GSM8K, HumanEval)—leading to inflated leaderboard scores and distorted generalization estimates. To address this, the authors propose a practice-oriented benchmark contamination detection framework: Paired Confidence Significance Testing (PaCoST). PaCoST constructs a distributionally matched counterpart for each benchmark item, measures the model's confidence on both the original and counterpart samples, and statistically tests whether confidence on the original is significantly higher—without requiring access to model parameters. The method satisfies three practical desiderata: interpretability, cross-benchmark generalizability, and deployment efficiency. Empirical evaluation across popular open-source models reveals statistically significant contamination evidence in nearly all model–benchmark pairs. The work argues for new, more trustworthy LLM evaluation methods.

📝 Abstract
Large language models (LLMs) are known to be trained on vast amounts of data, which may unintentionally or intentionally include data from commonly used benchmarks. This inclusion can lead to deceptively high scores on model leaderboards, yet disappointing performance in real-world applications. To address this benchmark contamination problem, we first propose a set of requirements that practical contamination detection methods should satisfy. Following these requirements, we introduce PaCoST, a Paired Confidence Significance Testing method to effectively detect benchmark contamination in LLMs. Our method constructs a counterpart for each piece of data with the same distribution, then performs statistical analysis of the corresponding confidence scores to test whether the model is significantly more confident on the original benchmark. We validate the effectiveness of PaCoST and apply it to popular open-source models and benchmarks. We find that almost all models and benchmarks we tested show some degree of suspected contamination. We conclude by calling for new LLM evaluation methods.
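The core statistical idea—compare a model's confidence on each original benchmark item against a distributionally matched counterpart, then test whether the paired confidence gap is significant—can be sketched as follows. This is a simplified illustration, not the paper's implementation: the helper name, the toy confidence scores, and the large-sample normal approximation to the paired t-test's p-value are all assumptions for demonstration.

```python
import math
from statistics import NormalDist

def paired_confidence_test(conf_original, conf_counterpart):
    """One-sided paired significance test (sketch of the PaCoST idea):
    is the model significantly MORE confident on original benchmark
    items than on their distributionally matched counterparts?

    Confidence scores are assumed to lie in [0, 1] (e.g. the probability
    the model assigns to its own answer). Returns the paired t statistic
    and a p-value via a normal approximation (adequate for large n).
    """
    assert len(conf_original) == len(conf_counterpart)
    n = len(conf_original)
    diffs = [o - c for o, c in zip(conf_original, conf_counterpart)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    p_value = 1.0 - NormalDist().cdf(t)  # one-sided: H1 is mean diff > 0
    return t, p_value

# Hypothetical scores: the model is consistently more confident
# on the original items, as a contaminated model would be.
orig = [0.92, 0.88, 0.95, 0.90, 0.93, 0.89, 0.94, 0.91]
ctrp = [0.71, 0.75, 0.70, 0.78, 0.72, 0.74, 0.69, 0.77]
t, p = paired_confidence_test(orig, ctrp)
print(f"t = {t:.2f}, p = {p:.4g}")  # a small p suggests contamination
```

In practice one would use an exact Student-t p-value (e.g. `scipy.stats.ttest_rel` with `alternative="greater"`) rather than the normal approximation, and the confidence scores would come from the model's output distribution over its answers.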
Problem

Research questions and friction points this paper is trying to address.

Benchmark contamination inflates leaderboard scores while real-world performance disappoints
Practical contamination detection lacks clearly stated requirements to guide method design
Popular open-source models and benchmarks need auditing for suspected contamination
Innovation

Methods, ideas, or system contributions that make the work stand out.

PaCoST: paired confidence significance testing for detecting benchmark contamination in LLMs
Constructs a distributionally matched counterpart for each benchmark item and compares model confidence
Finds statistically significant confidence gaps across nearly all tested model–benchmark pairs