🤖 AI Summary
Existing evaluation methods for automatic question generation struggle to explicitly model critical errors such as factual hallucination and answer mismatch, often leading to overestimation of output quality. This work proposes ErrEval, a novel framework that introduces explicit error diagnosis into the evaluation pipeline for the first time. ErrEval employs a lightweight, plug-in error detector to identify structural, linguistic, and content-related errors, then leverages these diagnostic results as interpretable evidence to guide large language models toward more accurate quality scores. Evaluated on three benchmark datasets, the approach significantly improves alignment with human judgments, effectively mitigates over-scoring of low-quality outputs, and establishes a new paradigm for fine-grained, interpretable, and human-aligned evaluation.
📝 Abstract
Automatic Question Generation (QG) often produces outputs with critical defects, such as factual hallucinations and answer mismatches. However, existing evaluation methods, including LLM-based evaluators, mainly adopt a black-box, holistic paradigm without explicit error modeling, leading to the neglect of such defects and the overestimation of question quality. To address this issue, we propose ErrEval, a flexible and Error-aware Evaluation framework that enhances QG evaluation through explicit error diagnostics. Specifically, ErrEval reformulates evaluation as a two-stage process: error diagnosis followed by informed scoring. In the first stage, a lightweight, plug-and-play Error Identifier detects and categorizes common errors across structural, linguistic, and content-related aspects. These diagnostic signals are then incorporated as explicit evidence to guide LLM evaluators toward more fine-grained and grounded judgments. Extensive experiments on three benchmarks demonstrate the effectiveness of ErrEval, showing that incorporating explicit diagnostics improves alignment with human judgments. Further analyses confirm that ErrEval effectively mitigates the overestimation of low-quality questions.
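The two-stage "diagnose then score" pipeline described above can be sketched as follows. This is a minimal, assumption-laden illustration: the rule-based checks and the penalty-based scoring rule are stand-ins invented here for clarity, whereas the paper's actual framework uses a learned plug-and-play Error Identifier and passes its diagnostics as textual evidence into an LLM evaluator.

```python
# Illustrative sketch of a two-stage error-aware QG evaluation pipeline.
# The error checks and scoring rule are hypothetical, not ErrEval's
# actual implementation.

def identify_errors(question: str, answer: str, context: str) -> list[str]:
    """Stage 1: lightweight error diagnosis (toy rule-based stand-in
    for the paper's plug-and-play Error Identifier)."""
    errors = []
    # Structural check: a well-formed question should end with '?'
    if not question.strip().endswith("?"):
        errors.append("structural: missing question mark")
    # Linguistic check: flag suspiciously short questions
    if len(question.split()) < 3:
        errors.append("linguistic: question too short")
    # Content check: flag answer mismatch / possible hallucination if the
    # reference answer never appears in the source context
    if answer.lower() not in context.lower():
        errors.append("content: possible answer mismatch or hallucination")
    return errors

def score_with_diagnostics(question: str, answer: str, context: str):
    """Stage 2: scoring informed by diagnostics. Here we simply penalize
    a base score per detected error (assumed 0-5 scale); ErrEval instead
    supplies the diagnostics to an LLM evaluator as explicit evidence."""
    errors = identify_errors(question, answer, context)
    score = max(0.0, 5.0 - 2.0 * len(errors))
    return score, errors

context = "Marie Curie won the Nobel Prize in Physics in 1903."
good = score_with_diagnostics(
    "In which year did Marie Curie win the Nobel Prize in Physics?",
    "1903", context)
bad = score_with_diagnostics("Who", "1911", context)
print(good)  # (5.0, [])  -- clean question keeps the full score
print(bad)   # (0.0, [...]) -- three flagged errors drive the score down
```

The key design point this toy version preserves is that the evaluator never scores "blind": every score is accompanied by the explicit error evidence that produced it, which is what makes the judgment interpretable and guards against over-scoring defective questions.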