🤖 AI Summary
To address the lack of a systematic evaluation benchmark for Czech large language models (LLMs), this work introduces BenCzechMark (BCM), the first comprehensive native Czech evaluation benchmark. It comprises 50 tasks (including 14 newly collected ones) spanning 8 categories, and supports multiple task formats and multiple evaluation metrics. Methodologically, we propose a duel scoring mechanism grounded in statistical significance theory and draw on social preference theory for cross-task aggregation. Concurrently, we release the BUT-Large Czech Collection, a clean, traceable, large-scale Czech corpus used both for contamination analysis and for pretraining, and open-source the first Czech-centric 7B LLM. A live leaderboard is deployed on Hugging Face Spaces. To date, BCM hosts 50 model submissions, establishing a standardized, community-driven evaluation infrastructure for Czech LLM research.
📝 Abstract
We present BenCzechMark (BCM), the first comprehensive Czech language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses cross-task aggregation inspired by social preference theory. Our benchmark encompasses 50 challenging tasks with corresponding test datasets, primarily in native Czech, 14 of which are newly collected. These tasks span 8 categories and cover diverse domains, including historical Czech news, essays by pupils and language learners, and spoken word. Furthermore, we collect and clean the BUT-Large Czech Collection, the largest publicly available clean Czech language corpus, and use it for (i) contamination analysis and (ii) continuous pretraining of the first Czech-centric 7B language model with Czech-specific tokenization. We use our model as a baseline for comparison with publicly available multilingual models. Lastly, we release and maintain a leaderboard with 50 existing model submissions, where new model submissions can be made at https://huggingface.co/spaces/CZLC/BenCzechMark.
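To make the scoring idea concrete, here is a minimal sketch of how a duel-based scoring system of this kind can work: a model "wins" a duel on a task only if its per-example scores beat the opponent's with statistical significance, and wins are then aggregated across tasks in a Borda-like fashion, echoing social choice theory. The function names (`duel`, `duel_scores`), the one-sided sign test, and the `alpha=0.05` threshold are illustrative assumptions, not BCM's actual implementation, whose exact test and aggregation are described in the paper.

```python
import itertools
from scipy.stats import binomtest  # simple sign test as the significance check


def duel(scores_a, scores_b, alpha=0.05):
    """Decide whether model A significantly beats model B on one task.

    scores_a / scores_b: per-example metric values on the same test set.
    A win requires statistical significance, not just a higher mean;
    a one-sided sign test stands in here for whatever test BCM uses.
    """
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    ties = sum(a == b for a, b in zip(scores_a, scores_b))
    n = len(scores_a) - ties  # ties carry no information for the sign test
    if n == 0:
        return False
    return binomtest(wins, n, 0.5, alternative="greater").pvalue < alpha


def duel_scores(per_task_scores):
    """Aggregate pairwise duel wins across tasks, Borda-style.

    per_task_scores: {task: {model: [per-example scores]}}.
    Each significant duel win earns a point; points are normalized by the
    number of tasks so every task contributes equally to the final score.
    """
    models = set.intersection(*(set(by_model) for by_model in per_task_scores.values()))
    points = {m: 0.0 for m in models}
    for by_model in per_task_scores.values():
        for a, b in itertools.permutations(models, 2):
            if duel(by_model[a], by_model[b]):
                points[a] += 1.0 / len(per_task_scores)
    return points
```

The appeal of this design is that a model is rewarded only for differences large enough to survive a significance test, which damps leaderboard churn caused by metric noise on small test sets.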
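The contamination analysis mentioned above can likewise be sketched with a common heuristic: flag a test example as contaminated if any of its long word n-grams also occurs in the pretraining corpus. The 13-gram size and the `contamination_rate` helper below are assumptions for illustration; the procedure actually applied to the BUT-Large Czech Collection may differ.

```python
def contamination_rate(test_texts, corpus_ngrams, n=13):
    """Estimate the fraction of test examples overlapping a pretraining corpus.

    corpus_ngrams: a set of word n-grams precomputed from the corpus.
    An example counts as contaminated if it shares at least one n-gram
    with the corpus (13-gram overlap is a common heuristic choice).
    """
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    contaminated = sum(bool(ngrams(t) & corpus_ngrams) for t in test_texts)
    return contaminated / len(test_texts)
```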