A Third Paradigm for LLM Evaluation: Dialogue Game-Based Evaluation using clembench

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluation paradigms face a trade-off: reference-based methods offer controllability but lack ecological validity, while preference-based methods capture real-world interaction but sacrifice reproducibility and control. "Dialogue games" constitute a third, complementary paradigm that combines task controllability with interactive authenticity, enabling multi-turn, reference-free, goal-directed, and repeatable assessment. The paper presents clembench, an open-source, extensible benchmarking platform in continuous development since 2023, whose latest release is optimized for ease of general use. It ships with a set of English benchmark game instances for evaluating one's own models and supports straightforward extension with new, tailor-made targeted tests.

📝 Abstract
There are currently two main paradigms for evaluating large language models (LLMs), reference-based evaluation and preference-based evaluation. The first, carried over from the evaluation of machine learning models in general, relies on pre-defined task instances, for which reference task executions are available. The second, best exemplified by the LM-arena, relies on (often self-selected) users bringing their own intents to a site that routes these to several models in parallel, among whose responses the user then selects their most preferred one. The former paradigm hence excels at control over what is tested, while the latter comes with higher ecological validity, testing actual use cases interactively. Recently, a third complementary paradigm has emerged that combines some of the strengths of these approaches, offering control over multi-turn, reference-free, repeatable interactions, while stressing goal-directedness: dialogue game based evaluation. While the utility of this approach has been shown by several projects, its adoption has been held back by the lack of a mature, easily re-usable implementation. In this paper, we present clembench, which has been in continuous development since 2023 and has in its latest release been optimized for ease of general use. We describe how it can be used to benchmark one's own models (using a provided set of benchmark game instances in English), as well as how easily the benchmark itself can be extended with new, tailor-made targeted tests.
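The abstract's core mechanism, a programmatic game master that mediates a goal-directed, multi-turn interaction between players and checks rule compliance, can be sketched as follows. This is a toy illustration only; the class and function names (`TabooLikeGame`, `play`, the scripted players) are hypothetical and do not reflect clembench's actual API.

```python
# Minimal sketch of a dialogue-game evaluation loop (illustrative only;
# names do NOT mirror clembench's real interfaces).

class TabooLikeGame:
    """Toy word-guessing game: a describer must elicit a target word
    from a guesser without ever saying it (a forbidden-word rule)."""

    def __init__(self, target, max_turns=3):
        self.target = target
        self.max_turns = max_turns

    def play(self, describer, guesser):
        transcript = []
        for turn in range(self.max_turns):
            clue = describer(self.target, transcript)
            # Rule check: the programmatic game master aborts on violations,
            # which makes failures attributable and episodes repeatable.
            if self.target.lower() in clue.lower():
                return {"outcome": "aborted", "turns": turn + 1,
                        "transcript": transcript}
            transcript.append(("describer", clue))
            guess = guesser(transcript)
            transcript.append(("guesser", guess))
            if guess.strip().lower() == self.target.lower():
                return {"outcome": "success", "turns": turn + 1,
                        "transcript": transcript}
        return {"outcome": "failure", "turns": self.max_turns,
                "transcript": transcript}


# Scripted stand-ins for LLM players, just to exercise the loop.
def scripted_describer(target, transcript):
    return "A celestial body that orbits a planet."

def scripted_guesser(transcript):
    return "moon" if len(transcript) >= 1 else "star"

result = TabooLikeGame("moon").play(scripted_describer, scripted_guesser)
print(result["outcome"], result["turns"])  # prints: success 1
```

Because the game master, not a human judge or a fixed reference, decides success, episodes are reference-free yet reproducible: rerunning the same game instance against the same model yields a comparable, scorable trajectory.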
Problem

Research questions and friction points this paper is trying to address.

Develops dialogue game-based LLM evaluation paradigm
Combines control and ecological validity in testing
Provides reusable clembench tool for model benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dialogue game-based evaluation paradigm
Reference-free multi-turn interactions
Extensible benchmark with tailored tests
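The extensibility claim, i.e. that the benchmark can be grown with tailor-made targeted tests, rests on parameterizable game instances. A hedged sketch of what generating such instances might look like is below; the field names and the `make_instances` helper are hypothetical and do not correspond to clembench's actual instance format.

```python
# Hypothetical sketch of tailored test-instance generation
# (fields are illustrative, not clembench's real schema).
import json

def make_instances(targets, max_turns=3):
    """Build one task instance per target word for a taboo-like game,
    sharing a common turn budget."""
    return [{"game": "taboo_like",
             "instance_id": i,
             "target": t,
             "max_turns": max_turns}
            for i, t in enumerate(targets)]

instances = make_instances(["moon", "glacier", "syntax"])
print(json.dumps(instances[0]))
```

Varying the target list (or any other game parameter) yields new, targeted test sets without changing the game logic itself, which is the point of separating game rules from task instances.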