Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes OmniScore, a lightweight, deterministic evaluation framework based on a sub-1B-parameter model trained on 564,000 multilingual synthetic samples. It addresses key limitations of large language models as automatic evaluators: high cost, prompt sensitivity, strong language dependence, and poor reproducibility. OmniScore supports multidimensional scoring of generation quality in reference-only, source-only, or hybrid modes and establishes the first reproducible, scalable evaluation system covering 107 languages. Validated on 8,617 human-annotated samples across six languages, it performs strongly on question answering, machine translation, and summarization, closely approximating the judgment behavior of large models while offering low latency and consistent cross-lingual evaluation, thereby presenting a new paradigm for efficient, scalable text generation assessment.
📝 Abstract
While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are often costly and highly sensitive to prompt design, language, and aggregation strategies, which severely limits reproducibility. To address these challenges, we propose \textbf{\textit{OmniScore}}, a family of complementary, deterministic learned metrics built on small ($<$1B-parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models with large-scale synthetic supervision ($\sim$564k instances in \textbf{107 languages}) and evaluated them on 8,617 manually annotated instances. The OmniScore family produces reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluation. We evaluate these models on question answering (QA), translation, and summarization in \textbf{6 languages}. Our results demonstrate that lightweight, deterministic learned metrics provide a highly practical and scalable alternative to frontier LLMs. Our models and datasets can be found at https://huggingface.co/collections/QCRI/omniscore
Problem

Research questions and friction points this paper is trying to address.

LLM-as-a-Judge
multilingual evaluation
text generation evaluation
reproducibility
automated evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

deterministic metrics
multilingual evaluation
learned metrics
OmniScore
LLM-as-a-Judge