🤖 AI Summary
Existing evaluation paradigms for large language models (LLMs) rely heavily on reference texts and closed-ended multiple-choice formats, limiting their scalability and applicability across diverse generative tasks. Method: This work introduces a reference-free automatic evaluation framework targeting multiple generative tasks—dialogue generation, text expansion, summary generation, and non-factoid question answering. It constructs a diverse task suite, designs reference-free multidimensional metrics, and establishes a cross-team benchmarking protocol with standardized datasets. Contribution/Results: Four teams submitted 48 runs, evaluated reproducibly under a unified benchmark covering broad generative capabilities. The framework enhances the applicability, fairness, and openness of evaluation, offering a paradigm for LLM assessment beyond reference-dependent and narrow-task benchmarks.
📝 Abstract
In this paper, we provide an overview of the NTCIR-18 Automatic Evaluation of LLMs (AEOLLM) task. As large language models (LLMs) become increasingly popular in both academia and industry, how to effectively evaluate their capabilities remains a critical yet challenging issue. Existing methods fall into two types: manual evaluation, which is expensive, and automatic evaluation, which faces limitations in both task format (most benchmarks rely on multiple-choice questions) and evaluation criteria (dominated by reference-based metrics). To advance innovation in automatic evaluation, we propose the AEOLLM task, which focuses on generative tasks and encourages reference-free methods. In addition, we set up diverse subtasks, including dialogue generation, text expansion, summary generation, and non-factoid question answering, to comprehensively test different methods. This year, we received 48 runs from 4 teams in total. This paper describes the background of the task, the dataset, the evaluation measures, and the evaluation results.
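Shared tasks of this kind typically score a submitted evaluation method by how well its automatic judgments agree with human judgments over the same system outputs. The sketch below is illustrative only (the function name and sample numbers are hypothetical, and AEOLLM's actual measures may differ): it computes a rank-agreement statistic between an automatic judge's scores and human ratings, counting concordant and discordant pairs.

```python
from itertools import combinations

def rank_agreement(auto_scores, human_scores):
    """Rank agreement between automatic and human scores.

    Counts pairs of items ordered the same way (concordant) vs. the
    opposite way (discordant). Tied pairs are excluded from the
    denominator, so this is Goodman-Kruskal gamma; it equals Kendall's
    tau when there are no ties. Illustrative sketch, not the official
    AEOLLM measure.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(auto_scores)), 2):
        prod = (auto_scores[i] - auto_scores[j]) * (human_scores[i] - human_scores[j])
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 0.0

# Hypothetical example: scores an automatic judge assigned to four
# system outputs, and the corresponding human quality ratings.
auto = [0.9, 0.4, 0.7, 0.2]
human = [5, 2, 4, 1]
print(rank_agreement(auto, human))  # all pairs concordant -> 1.0
```

A reference-free method that ranks outputs the way humans do scores near 1.0; one that inverts the human ranking scores near -1.0.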