Training on the Test Task Confounds Evaluation and Emergence

📅 2024-07-10
🏛️ arXiv.org
📈 Citations: 6
Influential: 0
🤖 AI Summary
This work identifies a previously overlooked phenomenon in large language model (LLM) evaluation, termed training on the test task, in which knowledge about evaluation tasks is incorporated at training time, distorting relative model rankings and inducing spurious claims of emergent capabilities. Unlike data leakage or contamination, this practice is not a malpractice, yet it systematically undermines evaluation validity. The authors formally define the phenomenon and quantify its impact, and propose a correction: fine-tune every model under comparison on the same task-relevant data before evaluation, so that benchmark differences can no longer be attributed to unequal task exposure. After this adjustment, the relative ordering of mainstream models shifts significantly; moreover, several purported "emergent abilities" improve smoothly and continuously under gradual task exposure rather than appearing abruptly, suggesting that such phenomena stem from evaluation bias rather than genuine capability discontinuities. The approach enables more faithful, interpretable LLM assessment and challenges prevailing assumptions about emergence under scaling.

📝 Abstract
We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices like training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of practices that utilize knowledge about evaluation tasks at training time. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for the effect of training on the test task on benchmark evaluations. Put simply: fine-tune each model under comparison on the same task-relevant data prior to evaluation. We then show that instances of emergent behavior disappear gradually as models train on the test task. Our work promotes a new perspective on the evaluation of large language models, with broad implications for benchmarking and the study of emergent capabilities.
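The adjustment described in the abstract can be sketched as a toy protocol: give every model under comparison the same task-relevant data before scoring, so that rankings no longer reflect unequal prior task exposure. The `fine_tune` and `evaluate` functions below, and the toy models, are illustrative stand-ins under assumed dynamics, not the authors' implementation.

```python
# Sketch of the evaluation adjustment: equalize task exposure across models
# before comparing benchmark scores. All names and numbers are hypothetical.

def fine_tune(model, task_data):
    """Stand-in for task-aligned fine-tuning: record extra task exposure."""
    return {**model, "task_exposure": model["task_exposure"] + len(task_data)}

def evaluate(model):
    """Stand-in benchmark: base skill plus a saturating task-exposure bonus."""
    return model["skill"] + 0.1 * min(model["task_exposure"], 5)

# Model A is weaker but was trained with more test-task knowledge.
model_a = {"skill": 0.50, "task_exposure": 5}
model_b = {"skill": 0.60, "task_exposure": 0}

naive = {"A": evaluate(model_a), "B": evaluate(model_b)}

task_data = ["task-relevant example"] * 5  # the same data for every model
adjusted = {
    "A": evaluate(fine_tune(model_a, task_data)),
    "B": evaluate(fine_tune(model_b, task_data)),
}

# Naively, A appears to beat B; after equal task exposure, B's edge returns.
print(naive, adjusted)
```

In this toy setup, the naive comparison favors model A only because of its prior task exposure; once both models receive the same task-relevant data, the ordering flips.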
Problem

Research questions and friction points this paper is trying to address.

LLM evaluation is confounded when models are trained with knowledge of the test task
The apparent superiority of one model family over another may stem from differing degrees of training on the test task
Instances of emergent behavior diminish as models train on the test task
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tune each model under comparison on the same task-relevant data prior to evaluation
A method to adjust benchmark evaluations for the effect of training on the test task
Evidence that emergent behavior fades gradually as models train on the test task
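The claim that emergence can be an evaluation artifact admits a simple numeric illustration: a latent capability that grows smoothly with scale can look discontinuous under a thresholded benchmark metric, and equalizing task exposure (modeled here as a uniform capability boost) reveals gradual improvement instead. The functions and constants below are purely illustrative assumptions, not the paper's data.

```python
# Toy illustration: smooth latent capability + thresholded metric
# can produce apparent emergence; equal task exposure removes the jump.

def latent_capability(scale):
    # Assumed to grow smoothly with (log-)scale.
    return 0.1 * scale

def benchmark_score(capability, threshold=0.5):
    # Exact-match-style metric: zero until capability clears the threshold.
    return max(0.0, capability - threshold) * 2

scales = range(1, 11)
raw = [benchmark_score(latent_capability(s)) for s in scales]

# After fine-tuning every model on the same task data, assume a uniform
# capability boost; smaller models now clear the threshold too.
boost = 0.4
adjusted = [benchmark_score(latent_capability(s) + boost) for s in scales]

# raw looks "emergent": flat at zero, then sudden growth at large scale.
# adjusted improves almost from the start, with no sharp discontinuity.
print(raw)
print(adjusted)
```

The metric, not the model, produces the apparent discontinuity in `raw`; under equal task exposure the same smooth capability curve yields a smooth score curve.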