Data Contamination Quiz: A Tool to Detect and Estimate Contamination in Large Language Models

📅 2023-11-10
🏛️ arXiv.org
📈 Citations: 23
Influential: 3
🤖 AI Summary
This paper addresses the challenge of detecting training data contamination in large language models (LLMs). We propose the Data Contamination Quiz (DCQ), a parameter-free, zero-shot contamination detection method that frames detection as a multiple-choice discrimination task. DCQ constructs distractors via semantics-preserving, word-level perturbations and quantifies contamination likelihood by measuring an LLM's preference for the original instance over the perturbed alternatives. Our approach introduces a "semantically indistinguishable yet lexically matchable" paradigm, requires no training data or model fine-tuning, and inherently bypasses copyright-aware safety filters. To enhance robustness, we incorporate positional bias correction. Evaluated across multiple LLMs and datasets, DCQ achieves state-of-the-art performance. Moreover, it systematically reveals significantly higher levels of memorized data contamination in LLMs than previously recognized.
📝 Abstract
We propose the Data Contamination Quiz (DCQ), a simple and effective approach to detect data contamination in large language models (LLMs) and estimate its extent. Specifically, we frame data contamination detection as a series of multiple-choice questions: for each instance subsampled from a specific dataset partition, we create three perturbed versions, where the changes are limited to word-level perturbations. These perturbations, together with the original dataset instance, form the answer options in the DCQ, with an extra option allowing the selection of none of the above. Since the only distinguishing signal among the options is the exact wording relative to the original dataset instance, an LLM tasked with identifying the original instance gravitates toward it if the model has been exposed to that instance during training. After accounting for positional biases in LLMs, the quiz performance reveals the contamination level of the tested model with respect to the dataset partition the quiz covers. Applied to various datasets and LLMs under both controlled and uncontrolled contamination, and without any access to training data or model parameters, our findings suggest that DCQ achieves state-of-the-art results and uncovers greater contamination through memorization than existing methods. It also bypasses safety filters more proficiently, especially those designed to prevent generating copyrighted content.
Problem

Research questions and friction points this paper is trying to address.

Detect data contamination in large language models without access to training data or model parameters
Estimate contamination levels via multiple-choice quiz performance
Bypass copyright-aware safety filters that mask memorized content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frames contamination detection as a multiple-choice quiz
Builds distractors from word-level perturbations of dataset instances
Measures memorization via the model's preference for the original instance
Corrects for positional bias in option selection
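The quiz construction described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the actual DCQ uses semantics-preserving perturbations generated by an LLM, whereas the `perturb` helper and its tiny synonym table here are hypothetical stand-ins, and `contamination_rate` is one plausible way to turn above-chance accuracy into a contamination estimate.

```python
import random

def make_quiz(original, rng, n_distractors=3):
    """Build one DCQ-style item: the original instance, word-level
    perturbed distractors, and a 'none of the above' option.
    Toy sketch; real perturbations are semantics-preserving synonym
    swaps produced by an LLM, not this lookup table."""
    synonyms = {"quick": "fast", "happy": "glad", "big": "large"}

    def perturb(text):
        # Swap one random word for a crude stand-in "synonym".
        words = text.split()
        i = rng.randrange(len(words))
        words[i] = synonyms.get(words[i].lower(), words[i] + "s")
        return " ".join(words)

    options = [original] + [perturb(original) for _ in range(n_distractors)]
    rng.shuffle(options)  # vary the answer position to probe positional bias
    answer = options.index(original)
    options.append("None of the provided options.")
    return options, answer

def contamination_rate(correct, total, n_options=5):
    """Excess accuracy over random chance, rescaled to [0, 1],
    as a rough contamination estimate (hypothetical estimator)."""
    chance = 1.0 / n_options
    accuracy = correct / total
    return max(0.0, (accuracy - chance) / (1.0 - chance))
```

In practice the paper also averages over permutations of the answer position, since LLMs exhibit systematic preferences for certain option slots; the `rng.shuffle` line above only hints at that correction.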