Estonian Native Large Language Model Benchmark

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
A scarcity of native evaluation resources and systematic benchmarks hinders rigorous assessment of large language models (LLMs) for Estonian, a low-resource language. Method: We introduce the first native Estonian LLM Benchmark, comprising seven task categories—grammar understanding, knowledge QA, summarization, contextual reasoning, and more—built exclusively from authentic, non–machine-translated corpora. To ensure reliability, we propose a dual-validation framework integrating human evaluation and LLM-as-a-judge scoring. Contribution/Results: Evaluating six base models and twenty-six instruction-tuned variants, we demonstrate that commercial models (e.g., Claude 3.7 Sonnet) consistently outperform open-weight counterparts. Our analysis confirms strong inter-rater agreement between LLM judges and human annotators (Spearman’s ρ > 0.92), validating high-quality LLMs as scalable, reliable proxies for linguistic capability assessment. This work establishes a novel, reproducible paradigm for evaluating LLMs in low-resource languages.

📝 Abstract
The availability of LLM benchmarks for the Estonian language is limited, and a comprehensive evaluation comparing the performance of different LLMs on Estonian tasks has yet to be conducted. We introduce a new benchmark for evaluating LLMs in Estonian, based on seven diverse datasets. These datasets assess general and domain-specific knowledge, understanding of Estonian grammar and vocabulary, summarization abilities, contextual comprehension, and more. The datasets are all generated from native Estonian sources without using machine translation. We compare the performance of base models, instruction-tuned open-source models, and commercial models. Our evaluation includes 6 base models and 26 instruction-tuned models. To assess the results, we employ both human evaluation and LLM-as-a-judge methods. Human evaluation scores showed moderate to high correlation with benchmark evaluations, depending on the dataset. Claude 3.7 Sonnet, used as an LLM judge, demonstrated strong alignment with human ratings, indicating that top-performing LLMs can effectively support the evaluation of Estonian-language models.
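The abstract reports correlation between human ratings and benchmark (LLM-judge) scores; Spearman's rank correlation is the agreement measure cited in the summary (ρ > 0.92). A minimal sketch of that computation, in pure Python with illustrative scores (the data below is hypothetical, not from the paper):

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the ranks."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical 1-5 ratings for eight model outputs
human = [4, 3, 5, 2, 4, 1, 5, 3]
judge = [5, 3, 5, 2, 4, 2, 4, 3]
print(f"Spearman's rho = {spearman(human, judge):.2f}")
```

A rho near 1 indicates the LLM judge ranks outputs in nearly the same order as human annotators, which is the property the paper uses to justify LLM judges as proxies.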
Problem

Research questions and friction points this paper is trying to address.

LLM benchmarks for Estonian-language evaluation are scarce
No comprehensive performance comparison of LLMs on Estonian tasks exists
Native Estonian datasets, built without machine translation, are needed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Native Estonian benchmark without machine translation
Human evaluation combined with LLM-as-judge methods
Comprehensive comparison of 32 open and commercial models
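The LLM-as-a-judge method pairs a scoring prompt with a parser that extracts a numeric rating from the judge's reply. A minimal sketch of such a harness; the prompt wording and the `call_judge` callable are assumptions for illustration (the paper's judge is Claude 3.7 Sonnet, accessed via its own API):

```python
import re

# Hypothetical judging prompt; the paper's actual template is not shown here.
PROMPT = (
    "Rate the following Estonian answer from 1 (poor) to 5 (excellent).\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Score:"
)

def parse_score(reply: str) -> int:
    """Extract the first digit 1-5 from the judge's free-text reply."""
    m = re.search(r"[1-5]", reply)
    if not m:
        raise ValueError(f"no score found in judge reply: {reply!r}")
    return int(m.group())

def judge(question: str, answer: str, call_judge) -> int:
    """Score one answer; call_judge is any prompt -> reply callable."""
    reply = call_judge(PROMPT.format(question=question, answer=answer))
    return parse_score(reply)

# Usage with a stub judge standing in for a real model call:
score = judge("Mis on Eesti pealinn?", "Tallinn.", lambda p: "Score: 5")
print(score)
```

Keeping the parser tolerant of surrounding text matters in practice, since judge models often wrap the score in a sentence rather than returning a bare digit.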
Helena Grete Lillepalu
Department of Software Science, Tallinn University of Technology, Estonia
Tanel Alumäe
Professor of Speech Processing, Tallinn University of Technology
Speech recognition · Natural language processing