🤖 AI Summary
Existing RAG evaluation metrics struggle to capture semantic reliability and subtle deviations in LLM-generated content. To address this, we propose KG-RAGEval—a knowledge graph–based fine-grained evaluation framework that jointly models relevance, factual consistency, and semantic drift by integrating knowledge graph construction, multi-hop reasoning path modeling, and semantic community clustering. Compared to baselines such as RAGAS, KG-RAGEval significantly enhances discriminative capability for nuanced semantic discrepancies, achieving an average 0.23 improvement in Spearman correlation (ρ) with human judgments across multiple RAG benchmarks. It particularly excels at detecting hallucinations and context misalignment. Extensive experiments with large-scale human annotations validate both the effectiveness and interpretability of the proposed metric.
📝 Abstract
Large language models (LLMs) have become a significant research focus and are applied across diverse fields such as text generation and dialog systems. One of the most important applications of LLMs is Retrieval-Augmented Generation (RAG), which substantially improves the reliability and relevance of generated content. However, evaluating RAG systems remains challenging: traditional evaluation metrics struggle to capture the key features of modern LLM-generated content, which often exhibits high fluency and naturalness. Inspired by RAGAS, a well-known RAG evaluation framework, we extend it into a knowledge-graph-based (KG-based) evaluation paradigm that enables multi-hop reasoning and semantic community clustering to derive more comprehensive scoring metrics. These criteria yield a deeper understanding of RAG systems and a more nuanced view of their performance. To validate the effectiveness of our approach, we compare its scores with RAGAS and construct a human-annotated subset to measure the correlation between human judgments and automated metrics. We further conduct targeted experiments showing that our KG-based method is more sensitive to subtle semantic differences in generated outputs. Finally, we discuss key challenges in evaluating RAG systems and highlight directions for future research.
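The multi-hop reasoning component described above can be illustrated with a minimal sketch: extract (subject, relation, object) triples from retrieved context, build a directed graph, and check whether a generated claim is supported by a bounded-length reasoning path. All entity names, triples, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict, deque

# Hypothetical toy triples extracted from retrieved context
# (entities and relations are illustrative only).
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
    ("Marie Curie", "won", "Nobel Prize"),
]

def build_graph(triples):
    """Index triples as an adjacency list: subject -> [(relation, object), ...]."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

def multi_hop_path(graph, source, target, max_hops=3):
    """BFS for a reasoning path from source to target within max_hops edges.

    Returns the path as a list of traversed triples, or None if no
    path exists (which could flag a potential hallucination).
    """
    queue = deque([(source, [])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_hops:
            continue
        for rel, nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

graph = build_graph(triples)
# Verifies the 2-hop claim "Marie Curie was born in Poland":
path = multi_hop_path(graph, "Marie Curie", "Poland")
```

In a full framework, the fraction of generated claims supported by such paths (and the hop lengths of those paths) could feed into fine-grained factual-consistency scores; this sketch only shows the path-finding step.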