🤖 AI Summary
This work identifies an overlooked phenomenon in large language model (LLM) evaluation, "training on the test task," in which knowledge about evaluation tasks is incorporated at training time, distorting relative model rankings and inducing spurious claims of emergent capabilities. Unlike data leakage or contamination, this practice is not a malpractice, yet it still undermines the validity of benchmark comparisons. The authors formally characterize the effect and quantify its impact. To adjust for it, they propose a controlled-variable correction: fine-tune every model under comparison on the same task-relevant data before evaluation, so that differences in prior task exposure no longer confound the comparison. After this adjustment, the relative ordering of mainstream models shifts substantially; moreover, several purported "emergent abilities" improve smoothly and continuously as task exposure grows, losing their abruptness. This suggests that such phenomena stem from evaluation bias rather than genuine capability discontinuities. The approach enables more faithful, interpretable LLM assessment and challenges prevailing assumptions about emergence under scaling.
📝 Abstract
We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices such as training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of practices that utilize knowledge about evaluation tasks at training time. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for the effect of training on the test task on benchmark evaluations. Put simply, we fine-tune each model under comparison on the same task-relevant data prior to evaluation. We then show that instances of emergent behavior disappear gradually as models train on the test task. Our work promotes a new perspective on the evaluation of large language models, with broad implications for benchmarking and the study of emergent capabilities.
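The adjustment described in the abstract can be sketched as a small toy simulation. This is a minimal illustration, not the paper's implementation: the class `FinetunableModel`, the `task_exposure` attribute, and the scoring function are all hypothetical stand-ins chosen to show why fine-tuning every model on the same task-relevant data equalizes prior task exposure before benchmarking.

```python
from dataclasses import dataclass

@dataclass
class FinetunableModel:
    """Hypothetical model whose benchmark score depends on task exposure."""
    name: str
    task_exposure: float = 0.0  # fraction of task-relevant data already seen

    def finetune(self, n_examples: int, total: int = 1000) -> None:
        # Seeing more task-relevant data raises exposure, saturating at 1.0.
        self.task_exposure = min(1.0, self.task_exposure + n_examples / total)

    def benchmark_score(self) -> float:
        # Toy assumption: score is a simple increasing function of exposure.
        return 50.0 + 40.0 * self.task_exposure

def adjusted_comparison(models, n_examples=1000):
    """Fine-tune each model on the SAME task data, then evaluate all of them."""
    for m in models:
        m.finetune(n_examples)
    return {m.name: m.benchmark_score() for m in models}

# Before adjustment, model B looks superior only because its training data
# contained more task-relevant material (higher initial exposure).
a = FinetunableModel("A", task_exposure=0.1)
b = FinetunableModel("B", task_exposure=0.8)
scores = adjusted_comparison([a, b])
print(scores)  # both models reach the same score once exposure is equalized
```

In this toy setting, equal fine-tuning saturates both models' task exposure, so the adjusted scores coincide; the apparent gap between A and B was an artifact of unequal training on the test task, which is the confound the paper's method is designed to remove.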