🤖 AI Summary
To address two limitations of large language models (LLMs) in reasoning—restricted feedback spaces and a lack of coordinated multi-agent training—this paper formalizes iterative answer refinement as a Markov decision process and proposes a multi-agent collaborative framework for dynamic solution-space exploration and feedback integration. The core contribution is DPSDP (Direct Policy Search by Dynamic Programming), a reinforcement learning algorithm that trains an actor-critic LLM system to iteratively refine answers via direct preference learning on self-generated data, with the theoretical guarantee that the learned policy can match the performance of any policy within the training distribution. On the MATH 500 benchmark, majority voting over five refinement steps raises first-turn accuracy from 58.2% to 63.2% with Ministral-based models, and ablations further confirm the benefits of multi-agent collaboration and out-of-distribution generalization.
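The "direct preference learning on self-generated data" component can be illustrated with a DPO-style pairwise objective: the policy is pushed to assign a larger log-likelihood margin (relative to a frozen reference model) to the preferred refinement than to the rejected one. The sketch below is a minimal, hypothetical rendering of that loss for a single preference pair; the function name and the `beta` temperature are illustrative, not taken from the paper.

```python
import math

def preference_loss(logp_chosen: float, logp_rejected: float,
                    ref_logp_chosen: float, ref_logp_rejected: float,
                    beta: float = 0.1) -> float:
    """DPO-style loss for one preference pair (hypothetical sketch).

    logp_*      -- policy log-likelihoods of the chosen/rejected refinement
    ref_logp_*  -- same quantities under the frozen reference policy
    beta        -- temperature scaling the implicit reward margin
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: small when the policy already
    # prefers the chosen refinement more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this over pairs harvested from the system's own refinement rollouts is what "self-generated data" refers to: no external reward model is needed, only a preference ordering between candidate refinements.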
📝 Abstract
Leveraging more test-time computation has proven to be an effective way to boost the reasoning capabilities of large language models (LLMs). Among various methods, the verify-and-improve paradigm stands out for enabling dynamic solution exploration and feedback incorporation. However, existing approaches often suffer from restricted feedback spaces and a lack of coordinated training among the collaborating models, leading to suboptimal performance. To address this, we model this multi-turn refinement process as a Markov Decision Process and introduce DPSDP (Direct Policy Search by Dynamic Programming), a reinforcement learning algorithm that trains an actor-critic LLM system to iteratively refine answers via direct preference learning on self-generated data. Theoretically, DPSDP can match the performance of any policy within the training distribution. Empirically, we instantiate DPSDP with various base models and show improvements on both in- and out-of-distribution benchmarks. For example, on the MATH 500 benchmark, majority voting over five refinement steps increases first-turn accuracy from 58.2% to 63.2% with Ministral-based models. An ablation study further confirms the benefits of multi-agent collaboration and out-of-distribution generalization.
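At inference time, the actor-critic interaction described above amounts to a simple loop: the actor proposes an answer, the critic returns feedback, the actor refines, and the final prediction is a majority vote over the answers from all turns. The sketch below shows that control flow only; `actor` and `critic` stand in for the two LLMs and are hypothetical callables, not an API from the paper.

```python
from collections import Counter

def refine_and_vote(question: str, actor, critic, turns: int = 5) -> str:
    """Multi-turn verify-and-improve loop with majority voting (sketch).

    actor(question, feedback)  -- hypothetical: returns a candidate answer,
                                  optionally conditioned on critic feedback
    critic(question, answer)   -- hypothetical: returns natural-language
                                  feedback on the current answer
    """
    answers = []
    answer = actor(question, feedback=None)  # first-turn attempt
    answers.append(answer)
    for _ in range(turns - 1):
        feedback = critic(question, answer)          # verify
        answer = actor(question, feedback=feedback)  # improve
        answers.append(answer)
    # Majority vote across all turns; ties resolved by earliest appearance.
    return Counter(answers).most_common(1)[0][0]
```

This is where the reported gain comes from: even when the first-turn answer is wrong, later refinements can converge on the correct answer often enough for the vote to recover it.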