Overview of the NTCIR-18 Automatic Evaluation of LLMs (AEOLLM) Task

📅 2025-03-17
🤖 AI Summary
Existing evaluation paradigms for large language models (LLMs) rely heavily on reference texts and closed-ended multiple-choice formats, limiting their scalability and applicability across diverse generative tasks. Method: This work introduces a reference-free automatic evaluation framework targeting multiple generative tasks—dialogue generation, text expansion, summary generation, and non-factoid question answering. It constructs a diverse task suite, designs reference-free multidimensional metrics, and establishes a cross-team benchmarking protocol with standardized datasets. Contribution/Results: Four teams submitted 48 runs, evaluated reproducibly under a unified benchmark—yielding a comprehensive, reference-free LLM evaluation benchmark covering broad generative capabilities. The framework enhances the applicability, fairness, and openness of evaluation, offering a paradigm for LLM assessment beyond reference-dependent and narrow-task benchmarks.

📝 Abstract
In this paper, we provide an overview of the NTCIR-18 Automatic Evaluation of LLMs (AEOLLM) task. As large language models (LLMs) grow increasingly popular in both academia and industry, how to effectively evaluate their capabilities has become a critical but still challenging issue. Existing methods can be divided into two types: manual evaluation, which is expensive, and automatic evaluation, which faces many limitations in task format (the majority are multiple-choice questions) and evaluation criteria (dominated by reference-based metrics). To advance innovation in automatic evaluation, we propose the AEOLLM task, which focuses on generative tasks and encourages reference-free methods. In addition, we set up diverse subtasks—dialogue generation, text expansion, summary generation, and non-factoid question answering—to comprehensively test different methods. This year, we received 48 runs from 4 teams in total. This paper describes the background of the task, the dataset, the evaluation measures, and the evaluation results.
Problem

Research questions and friction points this paper is trying to address.

Effective evaluation of large language models (LLMs) is critical but challenging.
Existing methods are either expensive (manual evaluation) or limited in task format and evaluation criteria (automatic evaluation).
The proposed AEOLLM task focuses on generative tasks and reference-free evaluation methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focuses on generative tasks evaluation
Encourages reference-free evaluation methods
Includes diverse subtasks for comprehensive testing
Authors

Junjie Chen — DCST, Tsinghua University, Quan Cheng Laboratory, Beijing 100084, China
Haitao Li — DCST, Tsinghua University, Quan Cheng Laboratory, Beijing 100084, China
Zhumin Chu — PhD student, Tsinghua University (information retrieval, user study, evaluation)
Yiqun Liu — DCST, Tsinghua University, Zhongguancun Laboratory, Beijing 100084, China
Qingyao Ai — Associate Professor, Dept. of CS&T, Tsinghua University (information retrieval, machine learning)