Language model developers should report train-test overlap

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 9 · Influential citations: 0
🤖 AI Summary
Widespread and often unreported train-test overlap undermines the credibility and interpretability of language model evaluations. Method: The paper documents the disclosure practices of 30 major model developers and engages with developers directly to obtain new information. Contribution/Results: Only 9 of the 30 developers (30%) report train-test overlap: 4 release training data under open-source licenses, enabling the community to measure overlap directly, and 5 publish their overlap methodology and statistics. Engagement with developers yielded novel overlap information for three additional developers. Based on these findings, the paper takes the position that developers should publish train-test overlap statistics and/or training data whenever they report results on public test sets, offering a practical pathway toward reproducible, verifiable language model evaluation.

📝 Abstract
Language models are extensively evaluated, but correctly interpreting evaluation results requires knowledge of train-test overlap, which refers to the extent to which the language model is trained on the very data it is being tested on. The public currently lacks adequate information about train-test overlap: most models have no public train-test overlap statistics, and third parties cannot directly measure train-test overlap since they do not have access to the training data. To make this clear, we document the practices of 30 model developers, finding that just 9 developers report train-test overlap: 4 developers release training data under open-source licenses, enabling the community to directly measure train-test overlap, and 5 developers publish their train-test overlap methodology and statistics. By engaging with language model developers, we provide novel information about train-test overlap for three additional developers. Overall, we take the position that language model developers should publish train-test overlap statistics and/or training data whenever they report evaluation results on public test sets. We hope our work increases transparency into train-test overlap and strengthens community-wide trust in model evaluations.
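
As a concrete illustration of what an overlap statistic can look like, below is a minimal Python sketch that flags a test example as overlapping when it shares an n-gram with the training corpus. This is a generic n-gram matching heuristic, not the methodology of any developer surveyed in the paper; the function names and the 13-gram default are illustrative assumptions.

```python
# Minimal sketch (illustrative, not any developer's actual methodology):
# estimate train-test overlap as the fraction of test examples that share
# at least one n-gram with the training corpus.

from typing import Iterable, List, Set, Tuple


def ngrams(tokens: List[str], n: int = 13) -> Set[Tuple[str, ...]]:
    """Return the set of n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def build_train_index(train_docs: Iterable[str], n: int = 13) -> Set[Tuple[str, ...]]:
    """Collect every n-gram seen anywhere in the training corpus."""
    index: Set[Tuple[str, ...]] = set()
    for doc in train_docs:
        index |= ngrams(doc.split(), n)
    return index


def overlap_rate(test_examples: Iterable[str],
                 train_index: Set[Tuple[str, ...]],
                 n: int = 13) -> float:
    """Fraction of test examples sharing at least one n-gram with the training data."""
    examples = list(test_examples)
    if not examples:
        return 0.0
    contaminated = sum(
        1 for ex in examples
        if ngrams(ex.split(), n) & train_index
    )
    return contaminated / len(examples)


if __name__ == "__main__":
    train = ["the quick brown fox jumps over the lazy dog " * 3]
    test = [
        "the quick brown fox jumps over the lazy dog again and again",
        "a completely unrelated benchmark question about arithmetic",
    ]
    idx = build_train_index(train, n=8)
    print(f"Estimated train-test overlap: {overlap_rate(test, idx, n=8):.0%}")
```

Real audits additionally need consistent tokenization, text normalization, and lookup structures that scale to web-sized corpora; and, as the abstract notes, a third party cannot run even this simple check without access to the training data, which is why the reporting burden falls on developers.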
Problem

Research questions and friction points this paper is trying to address.

Assess train-test overlap in language model evaluations
Address lack of public train-test overlap statistics
Promote transparency by urging developers to publish overlap data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Document train-test overlap practices of developers
Engage developers for novel overlap information
Advocate publishing overlap statistics and data