C2LEVA: Toward Comprehensive and Contamination-Free Language Model Evaluation

📅 2024-12-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
📄 PDF
🤖 AI Summary
LLM evaluation faces credibility challenges from training data contamination, a problem compounded by the inaccessibility of proprietary training corpora. To address this, the authors propose C2LEVA, a systematic contamination-free bilingual (Chinese–English) LLM evaluation benchmark spanning 22 fine-grained capability tasks. Its contamination prevention strategy fully automates test data renewal and enforces data protection during benchmark data release. The resulting "bilingual, multidimensional, contamination-free" evaluation paradigm is validated on 15 open-source and proprietary LLMs, demonstrating the benchmark's effectiveness for trustworthy and comparable LLM assessment.

📝 Abstract
Recent advances in large language models (LLMs) have shown significant promise, yet their evaluation raises concerns, particularly regarding data contamination due to the lack of access to proprietary training data. To address this issue, we present C$^2$LEVA, a comprehensive bilingual benchmark featuring systematic contamination prevention. C$^2$LEVA firstly offers a holistic evaluation encompassing 22 tasks, each targeting a specific application or ability of LLMs, and secondly a trustworthy assessment due to our contamination-free tasks, ensured by a systematic contamination prevention strategy that fully automates test data renewal and enforces data protection during benchmark data release. Our large-scale evaluation of 15 open-source and proprietary models demonstrates the effectiveness of C$^2$LEVA.
Problem

Research questions and friction points this paper is trying to address.

Addresses data contamination in LLM evaluation
Provides comprehensive bilingual benchmark for LLMs
Ensures trustworthy assessment with contamination-free tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive bilingual benchmark for LLM evaluation
Systematic contamination prevention strategy
Automated test data renewal and protection
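The paper's own renewal pipeline is not detailed on this page; as a generic illustration of the kind of contamination check such benchmarks rely on, a common approach in the literature flags test samples whose word-level n-grams overlap heavily with a training corpus. A minimal sketch (function names and thresholds are hypothetical, not from the paper):

```python
def ngrams(text, n=13):
    """Return the set of word-level n-grams in a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(test_sample, training_corpus, n=13, threshold=0.8):
    """Flag a test sample whose n-grams overlap heavily with training documents."""
    sample_grams = ngrams(test_sample, n)
    if not sample_grams:
        return False
    corpus_grams = set()
    for doc in training_corpus:
        corpus_grams |= ngrams(doc, n)
    overlap = len(sample_grams & corpus_grams) / len(sample_grams)
    return overlap >= threshold
```

In practice, a renewal pipeline would regenerate or replace any test item that such a check flags, rather than merely reporting it.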
Authors
Yanyang Li, The Chinese University of Hong Kong (Natural Language Processing)
Tin Long Wong, Department of Computer Science and Engineering, The Chinese University of Hong Kong
Cheung To Hung, Department of Computer Science and Engineering, The Chinese University of Hong Kong
Jianqiao Zhao, Department of Computer Science and Engineering, The Chinese University of Hong Kong
Duo Zheng, The Chinese University of Hong Kong (Computer Vision)
Ka Wai Liu, Department of Computer Science and Engineering, The Chinese University of Hong Kong
Michael R. Lyu, Professor of Computer Science & Engineering, The Chinese University of Hong Kong (software engineering, software reliability, fault tolerance, machine learning, distributed systems)
Liwei Wang, Department of Computer Science and Engineering, The Chinese University of Hong Kong