🤖 AI Summary
In Retrieval-Augmented Generation (RAG) systems, opaque interactions between retrieval and generation components obscure knowledge provenance, undermine factual fidelity, and limit interpretability.
Method: We propose the first evaluation framework supporting interactive, fine-grained cross-component analysis—integrating retrieval quality assessment, generation-fidelity analysis, and knowledge-path tracing into a multi-level visual analytics system. The architecture is co-designed through a systematic literature review and expert interviews, moving beyond traditional isolated evaluation paradigms to enable traceable end-to-end and component-level assessment.
Contribution/Results: Evaluated on real-world RAG deployments and through expert validation, our framework accurately identifies failure modes (e.g., retrieval–generation misalignment, hallucination propagation), facilitates domain-specific optimization, and significantly enhances RAG system reliability and explainability.
📝 Abstract
Retrieval-Augmented Generation (RAG) systems have emerged as a promising solution to enhance large language models (LLMs) by integrating external knowledge retrieval with generative capabilities. While significant advancements have been made in improving retrieval accuracy and response quality, a critical challenge persists: the internal knowledge integration and retrieval–generation interactions in RAG workflows are largely opaque. This paper introduces RAGTrace, an interactive evaluation system designed to analyze retrieval and generation dynamics in RAG-based workflows. Informed by a comprehensive literature review and expert interviews, the system supports multi-level analysis, ranging from high-level performance evaluation to fine-grained examination of retrieval relevance, generation fidelity, and cross-component interactions. Unlike conventional evaluation practices that assess retrieval or generation quality in isolation, RAGTrace enables integrated exploration of retrieval–generation relationships, allowing users to trace knowledge sources and identify potential failure cases. The system's workflow allows users to build, evaluate, and iterate on retrieval processes tailored to their specific domains of interest. The effectiveness of the system is demonstrated through case studies and expert evaluations on real-world RAG applications.
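To make the knowledge-tracing idea concrete, the sketch below shows one way a RAG pipeline could record retrieval–generation interactions so that each generated sentence links back to its supporting retrieved documents. This is a minimal illustration of the general technique, not RAGTrace's actual API; all names (`TraceEvent`, `RAGTraceLog`, the `log_*` methods) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    stage: str    # "retrieval" or "generation"
    detail: dict  # stage-specific payload

@dataclass
class RAGTraceLog:
    """Hypothetical per-query provenance log for a RAG pipeline."""
    query: str
    events: list = field(default_factory=list)

    def log_retrieval(self, doc_id: str, score: float) -> None:
        self.events.append(TraceEvent("retrieval", {"doc_id": doc_id, "score": score}))

    def log_generation(self, sentence: str, supporting_doc_ids: list) -> None:
        # Linking each generated sentence to retrieved documents is what
        # enables knowledge-path tracing and misalignment detection.
        self.events.append(TraceEvent("generation",
                                      {"sentence": sentence,
                                       "supports": supporting_doc_ids}))

    def unsupported_sentences(self) -> list:
        """Generated sentences citing no retrieved document: hallucination candidates."""
        return [e.detail["sentence"] for e in self.events
                if e.stage == "generation" and not e.detail["supports"]]

trace = RAGTraceLog("When was the transistor invented?")
trace.log_retrieval("doc_17", 0.91)
trace.log_generation("The transistor was invented in 1947.", ["doc_17"])
trace.log_generation("It won the Nobel Prize in 1956.", [])
print(trace.unsupported_sentences())  # → ['It won the Nobel Prize in 1956.']
```

A real system would attach richer metadata (chunk offsets, retriever scores per sentence, model logits), but even this flat event log suffices to surface the failure modes the paper targets, such as generated claims with no retrieved support.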