🤖 AI Summary
Current retrieval-augmented language models often rely on single-perspective reasoning and outcome-oriented rewards, limiting their capacity for deep, self-correcting multi-step inference. This work proposes Adversarial Reasoning RAG (ARR), a novel framework that integrates adversarial and collaborative dynamics within multi-perspective reasoning. Specifically, a Reasoner and a Verifier interactively reason over retrieved evidence, guided by a process-aware reward mechanism that combines observational signals with model uncertainty to steer optimization. Evaluated across multiple benchmarks, the proposed approach significantly improves both reasoning accuracy and robustness, demonstrating the effectiveness of multi-perspective adversarial collaboration in enhancing complex reasoning capabilities.
📝 Abstract
Recent advances in synergizing large reasoning models (LRMs) with retrieval-augmented generation (RAG) have shown promising results, yet two critical challenges remain: (1) reasoning models typically operate from a single, unchallenged perspective, limiting their ability to conduct deep, self-correcting reasoning over external documents, and (2) existing training paradigms rely excessively on outcome-oriented rewards, which provide insufficient signal for shaping the complex, multi-step reasoning process. To address these issues, we propose a Reasoner-Verifier framework named Adversarial Reasoning RAG (ARR). The Reasoner and Verifier reason over retrieved evidence and critique each other's logic, guided by a process-aware advantage signal that requires no external scoring model. This reward combines explicit observational signals with internal model uncertainty to jointly optimize reasoning fidelity and verification rigor. Experiments on multiple benchmarks demonstrate the effectiveness of our method.