🤖 AI Summary
This study addresses a gap in retrieval-augmented generation (RAG) research, which has predominantly focused on single-turn question answering and lacks systematic evaluation in multi-turn dialogue settings. Under a unified experimental framework, the authors present the first comprehensive comparison of multiple RAG approaches (including vanilla RAG, reranking, hybrid BM25, and HyDE) across eight multi-domain conversational QA datasets, jointly evaluating retrieval and generation performance. The findings reveal that methodological complexity is not decisive; what matters more is the alignment between the retrieval strategy and the dataset structure. Simple yet robust methods such as reranking, hybrid BM25, and HyDE consistently outperform vanilla RAG, while some advanced techniques even underperform a no-RAG baseline. Performance is also strongly influenced by dataset characteristics and dialogue length.
📝 Abstract
Conversational question answering increasingly relies on retrieval-augmented generation (RAG) to ground large language models (LLMs) in external knowledge. Yet most existing studies evaluate RAG methods in isolation and focus primarily on single-turn settings. This paper addresses the lack of a systematic comparison of RAG methods for multi-turn conversational QA, where dialogue history, coreference, and shifting user intent substantially complicate retrieval. We present a comprehensive empirical study of vanilla and advanced RAG methods across eight diverse conversational QA datasets spanning multiple domains. Using a unified experimental setup, we evaluate both retrieval quality and answer generation with dedicated retrieval and generation metrics, and analyze how performance evolves across conversation turns. Our results show that robust yet straightforward methods, such as reranking, hybrid BM25, and HyDE, consistently outperform vanilla RAG. In contrast, several advanced techniques fail to yield gains and can even degrade performance below the No-RAG baseline. We further demonstrate that dataset characteristics and dialogue length strongly influence retrieval effectiveness, explaining why no single RAG strategy dominates across settings. Overall, our findings indicate that effective conversational RAG depends less on method complexity than on the alignment between the retrieval strategy and the dataset structure. We publicly release our code.\footnote{\href{https://github.com/Klejda-A/exp-rag.git}{GitHub Repository}}
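The abstract names hybrid BM25 among the stronger methods. As a rough illustration of the general idea (not the paper's implementation), the sketch below scores a toy corpus with a minimal BM25 and fuses those sparse scores with precomputed dense-retrieval scores via a min-max-normalized weighted sum; the function names, the `alpha` weight, and the stand-in dense scores are all hypothetical choices for this example.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Minimal Okapi BM25: score each document against the query tokens."""
    n = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n
    df = Counter(t for d in docs_tokens for t in set(d))  # document frequency
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if tf[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def hybrid_scores(sparse, dense, alpha=0.5):
    """Fuse sparse and dense scores: min-max normalize each, then weighted sum."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    return [alpha * s + (1 - alpha) * d for s, d in zip(norm(sparse), norm(dense))]

# Toy corpus and query (whitespace tokenization for brevity).
docs = [
    "retrieval augmented generation for dialogue".split(),
    "conversational question answering with retrieval".split(),
    "image classification with neural networks".split(),
]
query = "conversational retrieval".split()

sparse = bm25_scores(query, docs)
# In a real system, dense scores would come from an embedding model
# (e.g., over a HyDE-style hypothetical document); stubbed here.
fused = hybrid_scores(sparse, [0.2, 0.9, 0.1], alpha=0.5)
```

In this toy setup, document 1 matches both query terms, so BM25 ranks it first, and the fused score preserves that ranking; in practice the dense component can surface documents that share no lexical overlap with the query.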