🤖 AI Summary
This work proposes Self-Debate Reinforcement Learning (SDRL), a novel framework that internalizes multi-agent debate dynamics within a single language model during training. While existing reinforcement learning approaches yield models with strong individual reasoning capabilities, they do not explicitly optimize for collaborative integration of diverse reasoning paths in debate settings. SDRL addresses this gap by sampling multiple candidate solutions to construct self-generated debate contexts and jointly optimizing both the initial response and a refined answer conditioned on this context. By integrating verifiable rewards, multi-path reasoning sampling, and conditional generation strategies, the method significantly enhances model performance in debate scenarios across multiple base models and reasoning benchmarks, while simultaneously improving standalone reasoning ability.
📝 Abstract
The reasoning abilities of large language models (LLMs) have been substantially improved by reinforcement learning with verifiable rewards (RLVR). At test time, collaborative reasoning through Multi-Agent Debate (MAD) has emerged as a promising approach for enhancing LLM performance. However, current RLVR methods typically train LLMs to solve problems in isolation, without explicitly preparing them to synthesize and benefit from the different rationales that arise during debate. In this work, we propose Self-Debate Reinforcement Learning (SDRL), a training framework that equips a single LLM with both strong standalone problem-solving ability and the capacity to learn from diverse reasoning trajectories in MAD. Given a prompt, SDRL first samples multiple candidate solutions, then constructs a debate context containing these diverse reasoning paths and generates second-turn responses conditioned on this context. Finally, SDRL jointly optimizes both the initial and the debate-conditioned responses, yielding a model that is effective as both a standalone solver and a debate participant. Experiments across multiple base models and reasoning benchmarks show that SDRL improves overall MAD performance while simultaneously strengthening single-model reasoning.
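The two-turn structure described above (sample candidates, build a self-debate context, score both turns with a verifiable reward) can be sketched as a toy training signal. This is a minimal illustration, not the paper's implementation: the function names (`sdrl_step`, `build_debate_context`), the `toy_policy` stand-in for an LLM, and the simple sum of the two turns' rewards are all assumptions made for clarity; the actual method would apply policy-gradient updates to a real model.

```python
def verify(answer, gold):
    """Verifiable reward: 1.0 if the final answer matches the gold label, else 0.0."""
    return 1.0 if answer == gold else 0.0

def sample_candidates(policy, prompt, k):
    """Turn 1: sample k candidate solutions for the prompt (seed varies the sample)."""
    return [policy(prompt, seed=i) for i in range(k)]

def build_debate_context(prompt, candidates):
    """Assemble a self-generated debate context from the model's own candidates."""
    peers = "\n".join(f"Agent {i + 1} argues: {c}" for i, c in enumerate(candidates))
    return f"{prompt}\nConsider these solutions:\n{peers}\nGive a refined answer."

def sdrl_step(policy, prompt, gold, k=3):
    """One SDRL-style step: jointly score initial and debate-conditioned responses."""
    candidates = sample_candidates(policy, prompt, k)
    context = build_debate_context(prompt, candidates)
    refined = policy(context, seed=0)  # turn 2: response conditioned on the debate
    first_turn = sum(verify(c, gold) for c in candidates) / k  # mean turn-1 reward
    second_turn = verify(refined, gold)                        # turn-2 reward
    return first_turn + second_turn  # joint objective over both turns

def toy_policy(text, seed=0):
    """Stands in for an LLM: resolves correctly given peer solutions, else guesses."""
    if "Consider these solutions" in text:
        return "4"
    return "4" if seed % 2 == 0 else "5"

reward = sdrl_step(toy_policy, "What is 2+2?", gold="4", k=3)
```

With three sampled candidates ("4", "5", "4"), the turn-1 reward is 2/3, and the debate-conditioned answer is correct, so the joint signal is 2/3 + 1. In the real framework both terms would drive gradient updates on the same model, which is what couples standalone and debate ability.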