🤖 AI Summary
Existing NLI-based citation evaluation methods yield only coarse-grained (binary or ternary) supportiveness judgments and fail to capture fine-grained citation quality across the broader context of the user query, the generated text, the cited sources, and the full retrieval context.
Method: We propose CiteEval, the first principle-driven framework for fine-grained citation evaluation that integrates all four contextual dimensions (query, generation, cited sources, and retrieval context); construct CiteBench, the first high-quality, multi-domain, human-annotated benchmark for citation quality; and design CiteEval-Auto, a suite of automated metrics aligned with human judgment and cognitive principles, incorporating retrieval-context modeling and the framework's multidimensional citation principles (see the sketch after this summary).
Results: Experiments show CiteEval-Auto achieves over 35% higher correlation with human evaluations than prior NLI-based metrics, while offering superior scalability, interpretability, and fidelity.
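To make the contrast above concrete, here is a minimal, hypothetical sketch of the two evaluation granularities: a coarse NLI-style supportiveness label versus a fine-grained judgment conditioned on query, generation, cited sources, and retrieval context. The class `CitationInstance`, the functions `nli_support_label` and `fine_grained_citation_scores`, and the score dimensions shown are illustrative assumptions, not CiteEval's actual interface or metric definitions.

```python
# Hypothetical sketch: coarse NLI-style support check vs. a fine-grained,
# context-aware citation judgment. Names and dimensions are illustrative,
# not CiteEval's actual API.
from dataclasses import dataclass, field


@dataclass
class CitationInstance:
    query: str            # the user's information need
    generation: str       # the claim produced by the system, with citations
    cited_sources: list   # passages the system actually cited
    retrieval_context: list = field(default_factory=list)  # all retrieved passages, cited or not


def nli_support_label(instance: CitationInstance) -> str:
    """Coarse baseline: collapse citation quality into a ternary NLI-style label."""
    # A real baseline would run an entailment model over (cited_sources, generation).
    # Here we return a placeholder to show the output granularity.
    return "supported"  # one of {"supported", "partially_supported", "not_supported"}


def fine_grained_citation_scores(instance: CitationInstance) -> dict:
    """Fine-grained view: separate scores conditioned on query, generation,
    cited sources, and the full retrieval context (illustrative dimensions only)."""
    # Placeholder numbers; a real metric would compute these from the instance.
    return {
        "support": 0.8,              # do the cited sources back the generated claim?
        "relevance_to_query": 0.7,   # do the citations matter for the user's question?
        "context_coverage": 0.6,     # was better evidence available in the retrieval context but not cited?
    }


if __name__ == "__main__":
    example = CitationInstance(
        query="When was the first transatlantic telegraph cable completed?",
        generation="The first transatlantic telegraph cable was completed in 1858 [1].",
        cited_sources=["The 1858 cable briefly connected Ireland and Newfoundland."],
        retrieval_context=[
            "The 1858 cable briefly connected Ireland and Newfoundland.",
            "A durable replacement cable followed in 1866.",
        ],
    )
    print(nli_support_label(example))
    print(fine_grained_citation_scores(example))
```

The point of the contrast is the output shape: the baseline reduces everything to one label per citation, while a fine-grained evaluator can report where a citation fails (e.g., supported but irrelevant to the query, or ignoring stronger evidence in the retrieval context).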
📝 Abstract
Citation quality is crucial in information-seeking systems, directly influencing trust and the effectiveness of information access. Current evaluation frameworks, both human and automatic, mainly rely on Natural Language Inference (NLI) to assess binary or ternary supportiveness from cited sources, which we argue is a suboptimal proxy for citation evaluation. In this work, we introduce CiteEval, a citation evaluation framework driven by principles that focus on fine-grained citation assessment within a broad context, encompassing not only the cited sources but also the full retrieval context, user query, and generated text. Guided by the proposed framework, we construct CiteBench, a multi-domain benchmark with high-quality human annotations on citation quality. To enable efficient evaluation, we further develop CiteEval-Auto, a suite of model-based metrics that exhibit strong correlation with human judgments. Experiments across diverse systems demonstrate CiteEval-Auto's superior ability to capture the multifaceted nature of citations compared to existing metrics, offering a principled and scalable approach to evaluate and improve model-generated citations.
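The claim that CiteEval-Auto "exhibits strong correlation with human judgments" refers to a standard meta-evaluation step: score the same instances with both the automatic metric and human annotators, then correlate the two sets of scores. Below is a minimal sketch of that step with made-up scores; the specific correlation statistics and data are assumptions for illustration, not the paper's reported setup.

```python
# Hypothetical meta-evaluation sketch: correlate an automatic citation metric
# with human ratings on the same instances. All scores are made up; only the
# procedure is illustrated.
from scipy.stats import kendalltau, pearsonr

human_scores = [0.9, 0.2, 0.6, 0.8, 0.4]      # human citation-quality ratings (illustrative)
metric_scores = [0.85, 0.3, 0.5, 0.75, 0.35]  # automatic metric on the same instances

tau, tau_p = kendalltau(human_scores, metric_scores)
r, r_p = pearsonr(human_scores, metric_scores)

print(f"Kendall's tau: {tau:.3f} (p = {tau_p:.3f})")
print(f"Pearson's r:   {r:.3f} (p = {r_p:.3f})")
```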