ErrEval: Error-Aware Evaluation for Question Generation through Explicit Diagnostics

📅 2026-01-15
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing evaluation methods for question generation struggle to explicitly model critical errors such as factual hallucination and answer mismatch, often leading to overestimation of output quality. This work proposes ErrEval, a novel framework that introduces explicit error diagnosis into the evaluation pipeline for the first time. ErrEval employs lightweight, plug-in error detectors to identify structural, linguistic, and content-related errors, then leverages these diagnostic results as interpretable evidence to guide large language models in producing more accurate quality scores. Evaluated on three benchmark datasets, the approach significantly improves alignment with human judgments, effectively mitigates over-scoring of low-quality outputs, and establishes a new paradigm for fine-grained, interpretable, and human-aligned evaluation.

📝 Abstract
Automatic Question Generation (QG) often produces outputs with critical defects, such as factual hallucinations and answer mismatches. However, existing evaluation methods, including LLM-based evaluators, mainly adopt a black-box and holistic paradigm without explicit error modeling, leading to the neglect of such defects and overestimation of question quality. To address this issue, we propose ErrEval, a flexible and Error-aware Evaluation framework that enhances QG evaluation through explicit error diagnostics. Specifically, ErrEval reformulates evaluation as a two-stage process of error diagnosis followed by informed scoring. At the first stage, a lightweight plug-and-play Error Identifier detects and categorizes common errors across structural, linguistic, and content-related aspects. These diagnostic signals are then incorporated as explicit evidence to guide LLM evaluators toward more fine-grained and grounded judgments. Extensive experiments on three benchmarks demonstrate the effectiveness of ErrEval, showing that incorporating explicit diagnostics improves alignment with human judgments. Further analyses confirm that ErrEval effectively mitigates the overestimation of low-quality questions.
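The two-stage process described in the abstract (error diagnosis followed by diagnosis-informed scoring) can be sketched roughly as follows. This is a minimal illustrative mock-up, not the authors' implementation: the function names, the rule-based checks standing in for the Error Identifier, and the score-penalty rule standing in for the LLM evaluator are all assumptions for demonstration.

```python
# Hypothetical sketch of a two-stage, error-aware QG evaluation pipeline.
# All checks below are illustrative stand-ins for ErrEval's components.

def identify_errors(context: str, question: str, answer: str) -> list[str]:
    """Stage 1: lightweight, plug-and-play error diagnosis."""
    errors = []
    # Structural check: a well-formed question should end with a question mark.
    if not question.strip().endswith("?"):
        errors.append("structural: missing question mark")
    # Linguistic check: trivially short questions are likely malformed.
    if len(question.split()) < 3:
        errors.append("linguistic: question too short")
    # Content check: flag an answer mismatch when the answer is not
    # grounded anywhere in the source context.
    if answer.lower() not in context.lower():
        errors.append("content: answer not grounded in context")
    return errors

def score_with_diagnostics(context, question, answer, base_score=5.0):
    """Stage 2: diagnostics guide the (here simulated) evaluator."""
    errors = identify_errors(context, question, answer)
    # Toy rule: each detected error lowers the 1-5 quality score.
    # A real system would instead pass the diagnostics as textual
    # evidence in the LLM evaluator's prompt.
    score = max(1.0, base_score - 1.5 * len(errors))
    return score, errors

ctx = "The Eiffel Tower was completed in 1889 in Paris."
good = score_with_diagnostics(ctx, "When was the Eiffel Tower completed?", "1889")
bad = score_with_diagnostics(ctx, "Tower built", "1920")
print(good)  # (5.0, [])
print(bad)   # (1.0, [three error labels])
```

The key idea the sketch mirrors is that the scorer never judges the question holistically in a black box: it always receives the explicit error evidence, which is what prevents the over-scoring of defective outputs.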
Problem

Research questions and friction points this paper is trying to address.

Question Generation
Evaluation
Error Diagnosis
Hallucination
Answer Mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

Error-aware evaluation
Explicit error diagnostics
Question generation
LLM-based evaluation
Error identification
👥 Authors
Weiping Fu
PhD student of Xi'an Jiaotong University
LLM Evaluation
Bifan Wei
School of Continuing Education, Xi’an Jiaotong University, Xi’an, China
Jingyi Hao
School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, China
Yushun Zhang
The Chinese University of Hong Kong, Shenzhen, China
Optimization · Deep learning
Jian Zhang
Xi'an Jiaotong University | Nanyang Technological University
Natural Language Processing · Large Language Models · Event Graph
Jiaxin Wang
Anhui University of Science and Technology
Deep learning · Semi-supervised learning
Bo Li
Xi'an Jiaotong University
Yu He
School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, China
Lingling Zhang
Assistant Professor, Xi'an Jiaotong University
Computer vision · Few-shot learning · Zero-shot learning
Jun Liu
School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, China