🤖 AI Summary
Systematic evaluation of large language models (LLMs) for non-English languages—particularly Italian—suffers from fragmented benchmarks, inconsistent frameworks, and a lack of comprehensive, standardized assessment protocols.
Method: We introduce the first holistic evaluation benchmark for Italian LLMs, covering over 20 tasks spanning language understanding, reasoning, translation, and more. We propose a rolling, community-driven evaluation paradigm, coupled with a unified assessment framework and a fine-grained metric taxonomy. An automated pipeline for integrating heterogeneous datasets enables multi-task, multi-metric evaluation.
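The summary does not detail how such a pipeline is implemented. As a rough sketch only, a unified multi-task, multi-metric harness might pair each task with its own data loader, prompt template, and metric set behind a single interface; all names below (`TaskSpec`, `register`, `evaluate`) are hypothetical illustrations, not CALAMITA's actual API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a unified task registry; illustrative only,
# not CALAMITA's real framework.

@dataclass
class TaskSpec:
    name: str                                    # e.g. "summarization-news"
    ability: str                                 # e.g. "reasoning", "translation"
    load_data: Callable[[], list[dict]]          # yields {"input": ..., "reference": ...}
    build_prompt: Callable[[dict], str]          # task-specific prompt template
    metrics: dict[str, Callable[[str, str], float]]  # name -> scorer(prediction, reference)

REGISTRY: dict[str, TaskSpec] = {}

def register(task: TaskSpec) -> None:
    """Rolling integration point: new community tasks plug in here."""
    REGISTRY[task.name] = task

def evaluate(generate: Callable[[str], str]) -> dict[str, dict[str, float]]:
    """Run one model (a prompt -> completion function) over every registered
    task, averaging each metric across that task's examples."""
    results: dict[str, dict[str, float]] = {}
    for task in REGISTRY.values():
        examples = task.load_data()
        totals = {m: 0.0 for m in task.metrics}
        for ex in examples:
            prediction = generate(task.build_prompt(ex))
            for m, scorer in task.metrics.items():
                totals[m] += scorer(prediction, ex["reference"])
        results[task.name] = {m: s / len(examples) for m, s in totals.items()}
    return results
```

Under a design like this, each task owns its data format, prompt, and metrics, so heterogeneous datasets can coexist behind one `evaluate` entry point, and adding a new task never requires touching the harness itself.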
Contribution/Results: We conduct the first comprehensive capability analysis of four open-weight LLMs on Italian, revealing systematic cross-task performance disparities. All benchmark datasets, evaluation tooling, and results are fully open-sourced, establishing a reproducible, sustainable, and openly shared methodological foundation for evaluating LLMs in languages beyond English.
📝 Abstract
The rapid progress of Large Language Models (LLMs) has transformed natural language processing and broadened its impact across research and society. Yet, systematic evaluation of these models, especially for languages beyond English, remains limited. "Challenging the Abilities of LAnguage Models in ITAlian" (CALAMITA) is a large-scale collaborative benchmarking initiative for Italian, coordinated under the Italian Association for Computational Linguistics. Unlike existing efforts that focus on leaderboards, CALAMITA foregrounds methodology: it federates more than 80 contributors from academia, industry, and the public sector to design, document, and evaluate a diverse collection of tasks, covering linguistic competence, commonsense reasoning, factual consistency, fairness, summarization, translation, and code generation. Through this process, we not only assembled a benchmark of over 20 tasks and almost 100 subtasks, but also established a centralized evaluation pipeline that supports heterogeneous datasets and metrics. We report results for four open-weight LLMs, highlighting systematic strengths and weaknesses across abilities, as well as challenges in task-specific evaluation. Beyond quantitative results, CALAMITA exposes methodological lessons: the necessity of fine-grained, task-representative metrics, the importance of harmonized pipelines, and the benefits and limitations of broad community engagement. CALAMITA is conceived as a rolling benchmark, enabling continuous integration of new tasks and models. This makes it both a resource (the most comprehensive and diverse benchmark for Italian to date) and a framework for sustainable, community-driven evaluation. We argue that this combination offers a blueprint for other languages and communities seeking inclusive and rigorous LLM evaluation practices.
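Since the abstract groups nearly 100 subtasks under broad abilities, a natural reporting step is to roll heterogeneous per-task metrics up into per-ability summaries. The self-contained sketch below is an assumption about how such aggregation could work, not CALAMITA's published protocol; all task names, abilities, and scores are invented for illustration.

```python
from collections import defaultdict

# Hypothetical aggregation step: abilities (reasoning, summarization, ...)
# group many tasks, so a per-ability mean makes cross-model comparison
# readable. The weighting choice here is an assumption, not CALAMITA's
# published protocol.

def ability_summary(
    results: dict[str, dict[str, float]],      # task -> metric -> score
    task_abilities: dict[str, str],             # task -> ability
) -> dict[str, float]:
    """Return each ability's mean over its tasks' primary scores."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for task, metrics in results.items():
        primary = next(iter(metrics.values()))  # assume first metric is primary
        buckets[task_abilities[task]].append(primary)
    return {ability: sum(v) / len(v) for ability, v in buckets.items()}

# Toy usage with invented tasks and scores:
print(ability_summary(
    {"cloze-it": {"exact_match": 0.82},
     "nli-it": {"accuracy": 0.64},
     "sum-news": {"rouge_l": 0.31}},
    {"cloze-it": "linguistic competence",
     "nli-it": "reasoning",
     "sum-news": "summarization"},
))
```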