🤖 AI Summary
Large language models (LLMs) suffer from data contamination due to overlap between training corpora and static evaluation benchmarks, compromising evaluation integrity. Method: This work systematically analyzes the evolution of LLM evaluation paradigms and advocates a transition from static to dynamic benchmarking. It identifies the lack of standardized assessment criteria in current dynamic evaluation efforts and proposes, for the first time, a principled framework for dynamic benchmarks, encompassing task construction, data isolation, temporal validity control, and reproducibility assurance. Based on these principles, we develop an open-source evaluation repository (hosted on GitHub) that unifies static and dynamic methodologies, enabling contamination-aware experimentation and continuous benchmark updates. Contribution/Results: We deliver the first methodological framework explicitly designed to mitigate data contamination in LLM evaluation, accompanied by reusable design guidelines and open infrastructure, thereby addressing a critical gap in standardization and advancing rigorous, temporally grounded model assessment.
📝 Abstract
Data contamination has received increasing attention in the era of large language models (LLMs) due to their reliance on vast Internet-derived training corpora. To mitigate the risk of potential data contamination, LLM benchmarking has undergone a transformation from static to dynamic benchmarking. In this work, we conduct an in-depth analysis of existing benchmarking methods, from static to dynamic, aimed at reducing data contamination risks. We first examine methods that enhance static benchmarks and identify their inherent limitations. We then highlight a critical gap: the lack of standardized criteria for evaluating dynamic benchmarks. Based on this observation, we propose a series of optimal design principles for dynamic benchmarking and analyze the limitations of existing dynamic benchmarks. This survey provides a concise yet comprehensive overview of recent advancements in data contamination research, offering valuable insights and a clear guide for future research efforts. We maintain a GitHub repository that continuously collects both static and dynamic benchmarking methods for LLMs. The repository can be found at this link.
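To make the contamination problem concrete, a common screening step for static benchmarks is checking n-gram overlap between evaluation items and training documents. The sketch below is an illustrative example of that generic technique, not the paper's own method; all function names and the choice of n are hypothetical.

```python
# Illustrative sketch (not from the surveyed work): word-level n-gram overlap,
# a common heuristic for flagging benchmark items that may have leaked into
# a training corpus. The 13-gram window mirrors widely used decontamination
# practice, but the threshold and tokenization here are simplified assumptions.

def ngrams(text: str, n: int = 13) -> set:
    """Return the set of word-level n-grams in `text` (lowercased, whitespace-split)."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(benchmark_item: str, corpus_doc: str, n: int = 13) -> float:
    """Fraction of the benchmark item's n-grams that also occur in the corpus document.

    A ratio near 1.0 suggests the item is likely contained in the training data;
    0.0 means no shared n-grams at the chosen window size.
    """
    item_grams = ngrams(benchmark_item, n)
    if not item_grams:
        return 0.0  # item shorter than n tokens: no evidence either way
    doc_grams = ngrams(corpus_doc, n)
    return len(item_grams & doc_grams) / len(item_grams)
```

Dynamic benchmarks aim to sidestep the limits of such post-hoc screening (paraphrased leaks, partial matches) by regenerating or time-stamping evaluation data instead.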