Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses knowledge conflicts arising from cross-lingual retrieval in multilingual RAG. We propose DRAG, the first RAG framework integrating dialectical reasoning. DRAG employs a modular argumentation architecture to achieve multilingual semantic alignment, fine-grained conflict detection and resolution, and lightweight contextual argument generation. Instead of simple retrieval fusion, it delivers structured argumentative explanations for critical knowledge integration. Compared to conventional approaches, DRAG significantly improves factual accuracy and robustness in multilingual RAG, enhances resilience against noisy and contradictory knowledge, and supports low-overhead deployment. Crucially, it provides a scalable, analytical reasoning paradigm tailored for resource-constrained language models.

📝 Abstract
Retrieval-augmented generation (RAG) is key to enhancing large language models (LLMs) with systematic access to richer factual knowledge. Yet RAG brings intrinsic challenges, as LLMs must deal with potentially conflicting knowledge, especially in multilingual retrieval, where the heterogeneity of the retrieved knowledge may yield divergent outlooks. To make RAG more analytical, critical, and grounded, we introduce Dialectic-RAG (DRAG), a modular approach guided by Argumentative Explanations, i.e., a structured reasoning process that systematically evaluates retrieved information by comparing, contrasting, and resolving conflicting perspectives. Given a query and a set of related multilingual documents, DRAG selects and exemplifies relevant knowledge to deliver dialectic explanations that, by critically weighing opposing arguments and filtering extraneous content, clearly determine the final response. Through a series of in-depth experiments, we show the impact of our framework both as an in-context learning strategy and for constructing demonstrations to instruct smaller models. The results demonstrate that DRAG significantly improves RAG approaches while requiring low computational effort and providing robustness to knowledge perturbations.
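As a rough illustration of the dialectic workflow the abstract describes, the sketch below groups retrieved multilingual documents into opposing argument sets, filters out extraneous (neutral) content, and resolves the conflict into an explained verdict. All names here are illustrative assumptions, and the majority-vote resolution is a stand-in for the paper's LLM-driven argumentative reasoning, not its actual method.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    lang: str    # language of the retrieved passage, e.g. "en", "it"
    stance: str  # "support", "oppose", or "neutral" toward the query claim

def dialectic_rag(query: str, documents: list[Document]) -> dict:
    """Toy dialectic resolution: weigh opposing argument sets and
    produce a structured explanation alongside the verdict."""
    support = [d for d in documents if d.stance == "support"]
    oppose = [d for d in documents if d.stance == "oppose"]
    # Extraneous filtering: neutral documents are excluded from the debate.
    verdict = "support" if len(support) >= len(oppose) else "oppose"
    explanation = (
        f"{len(support)} document(s) support and {len(oppose)} oppose "
        f"the claim in '{query}'; resolved in favour of the {verdict} side."
    )
    return {"verdict": verdict, "explanation": explanation}
```

In the actual framework an LLM would assign stances, articulate each side's arguments, and justify the resolution in natural language; the value of the structure is that the final answer is accompanied by an explicit record of which retrieved evidence was weighed and which was discarded.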
Problem

Research questions and friction points this paper is trying to address.

Enhancing multilingual retrieval-augmented models with dialectic reasoning
Resolving conflicting knowledge in multilingual document retrieval
Improving robustness and efficiency in retrieval-augmented generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular DRAG approach with Argumentative Explanations
Compares and contrasts multilingual retrieved information
Filters extraneous content for robust responses