Neural Models and Language Model Prompting for the Multidimensional Evaluation of Open-Ended Conversations

📅 2025-08-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses multi-dimensional automatic evaluation of generative AI dialogue systems under a strict parameter budget (fewer than 13B), targeting dialogue-level, fine-grained score prediction. The authors combine language model (LM) prompting with lightweight encoder-based classification and regression models: a small LM generates structured feedback to augment the input features, which an efficient encoder then uses to predict scores across dimensions (e.g., coherence, informativeness). A key empirical finding is that distribution shift in the test-set annotations is a critical factor degrading evaluation performance, alongside validation of the joint prompt-encoder paradigm under the parameter constraint. In experiments, the prompting approach ranks second overall on the test set (behind only the baseline), while the encoder models achieve high correlation with human judgments for several dimensions on the validation set. The approach offers a resource-efficient pathway toward trustworthy dialogue evaluation in parameter-constrained settings.
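As a rough illustration of the prompt-encoder paradigm described above, the sketch below pairs a pretrained encoder with a small regression head over the concatenation of a dialogue and LM-generated feedback. The checkpoint name, feedback text, and number of dimensions are illustrative assumptions, not the paper's actual configuration.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint; the paper does not specify which encoder was used.
ENCODER_NAME = "roberta-base"

class FeedbackAugmentedRegressor(nn.Module):
    """Encoder with a regression head over [dialogue ; LM feedback] input pairs."""

    def __init__(self, encoder_name: str = ENCODER_NAME, num_dims: int = 4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One score per evaluation dimension (e.g., coherence, informativeness).
        self.head = nn.Linear(hidden, num_dims)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token ("CLS"-style) pooling
        return self.head(pooled)              # shape: (batch, num_dims)

tokenizer = AutoTokenizer.from_pretrained(ENCODER_NAME)
model = FeedbackAugmentedRegressor()

dialogue = "User: Hi! System: Hello, how can I help you today?"
# In the paper's setup this feedback would come from a small LM; hard-coded here.
feedback = "The responses stay on topic but add little new information."
batch = tokenizer(dialogue, feedback, truncation=True, return_tensors="pt")

with torch.no_grad():
    scores = model(batch["input_ids"], batch["attention_mask"])
print(scores)  # untrained head: arbitrary values, one per dimension
```

Encoding dialogue and feedback as a text pair lets the encoder attend across both, which is one plausible way the LM feedback could "guide" the score prediction.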

📝 Abstract
The growing number of generative AI-based dialogue systems has made their evaluation a crucial challenge. This paper presents our contribution to this important problem through the Dialogue System Technology Challenge (DSTC-12, Track 1), where we developed models to predict dialogue-level, dimension-specific scores. Given the constraint of using relatively small models (i.e., fewer than 13 billion parameters), our work follows two main strategies: employing Language Models (LMs) as evaluators through prompting, and training encoder-based classification and regression models. Our results show that while LM prompting achieves only modest correlations with human judgments, it still ranks second on the test set, outperformed only by the baseline. The regression and classification models, with significantly fewer parameters, demonstrate high correlation for some dimensions on the validation set. Although their performance decreases on the test set, it is important to note that the test set contains annotations with significantly different score ranges for some of the dimensions relative to the training and validation sets.
Problem

Research questions and friction points this paper is trying to address.

Evaluating open-ended conversations with generative AI systems
Predicting dialogue-level dimension-specific scores efficiently
Correlating small-model and prompting-based evaluations with human judgments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Language Model prompting for evaluation (see the sketch after this list)
Training encoder-based classification models
Regression models with significantly fewer parameters
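To make the first strategy concrete, here is a minimal, hypothetical LM-as-evaluator sketch. The model name, prompt wording, and 1-5 scale are illustrative assumptions; the paper only constrains the evaluator to fewer than 13B parameters.

```python
import re
from transformers import pipeline

# Hypothetical sub-13B instruction-tuned model; not necessarily the one used in the paper.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

PROMPT = (
    "You are a dialogue evaluator. Rate the following conversation on the "
    "dimension '{dimension}' with a single integer from 1 (worst) to 5 (best).\n\n"
    "Conversation:\n{dialogue}\n\nScore:"
)

def lm_judge(dialogue: str, dimension: str) -> int | None:
    """Prompt the LM for a dimension-specific score and parse the first digit it emits."""
    prompt = PROMPT.format(dimension=dimension, dialogue=dialogue)
    out = generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"]
    match = re.search(r"[1-5]", out[len(prompt):])  # look only at the continuation
    return int(match.group()) if match else None

dialogue = "User: What's the capital of France?\nSystem: Paris, of course!"
print(lm_judge(dialogue, "coherence"))
```

Greedy decoding (`do_sample=False`) keeps the judgments deterministic; in practice one would also guard against the model returning no parsable score.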
Michelle Elizabeth
Orange Research, Aix-Marseille University
Alicja Kasicka
Orange Research
Natalia Krawczyk
Orange Research
Magalie Ochs
TALEP, LIS, Aix Marseille Université
Affective Computing, Social Signal Processing
Gwénolé Lecorvé
Orange
Natural Language Processing, Language Modeling, Question Answering
Justyna Gromada
Orange Research
Lina M. Rojas-Barahona
Orange Research