DRAFT-RL: Multi-Agent Chain-of-Draft Reasoning for Reinforcement Learning-Enhanced LLMs

📅 2025-11-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RL-based multi-agent reflection frameworks rely on single-response generation by LLM agents, lacking structural diversity in reasoning paths. To address this, we propose DRAFT-RL: a novel framework integrating multi-draft Chain-of-Draft reasoning with reinforcement learning. Its core innovations include (i) multi-agent collaborative generation of diverse reasoning drafts, (ii) a peer-review mechanism for dynamic draft selection, and (iii) a learnable reward model that drives actor-critic-style policy optimization—enabling interpretable and robust self-evolution of reasoning. Evaluated on code generation, symbolic mathematics, and knowledge-intensive question answering, DRAFT-RL achieves significant improvements: +5.2% average accuracy gain and 1.8× faster training convergence, outperforming state-of-the-art baselines.

📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities in multi-step reasoning and problem-solving. Recent works introduce multi-agent reflection frameworks where multiple LLM agents critique and refine each other's outputs using reinforcement learning (RL). However, these approaches often rely on single-shot responses and lack structural diversity in reasoning exploration. In this paper, we propose DRAFT-RL, a novel framework that integrates Chain-of-Draft (CoD) reasoning into multi-agent RL training. Instead of generating single responses, each agent produces multiple drafts per query, which are then evaluated by peer agents and a learned reward model to identify the most promising trajectory. These selected drafts are used to refine future reasoning strategies through actor-critic learning. DRAFT-RL enables explicit multi-path exploration, peer-guided reflection, and reward-aligned selection, resulting in more robust and interpretable LLM agent behavior. We evaluate our method on complex reasoning tasks including code synthesis, symbolic math, and knowledge-intensive QA, demonstrating that DRAFT-RL outperforms existing reflective and RL-based agents by significant margins in both accuracy and convergence speed.
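The generate-then-select stage described in the abstract can be sketched as follows. This is a minimal toy, not the paper's implementation: `generate_drafts`, `peer_review`, `reward_model`, and the blending weight `alpha` are hypothetical stand-ins (strings and fixed heuristics in place of real LLM outputs, peer critiques, and a learned reward model), illustrating only how peer scores and reward-model scores combine to pick one draft per query.

```python
def generate_drafts(query, k=3):
    # Hypothetical stand-in for one LLM agent sampling k diverse drafts.
    return [f"{query} :: strategy-{i}" for i in range(k)]

def peer_review(drafts, reviewers):
    # Each peer agent returns a scalar critique score; average per draft.
    return [sum(r(d) for r in reviewers) / len(reviewers) for d in drafts]

def reward_model(draft):
    # Learned reward model stand-in: a fixed heuristic on the draft text.
    return 1.0 if "strategy-1" in draft else 0.2

def select_draft(drafts, peer_scores, alpha=0.5):
    # Blend peer critique with the reward model; keep the argmax draft.
    blended = [alpha * p + (1 - alpha) * reward_model(d)
               for d, p in zip(drafts, peer_scores)]
    best = max(range(len(drafts)), key=blended.__getitem__)
    return drafts[best], blended[best]

# Two toy reviewers: a constant scorer and one that favors strategy-2.
reviewers = [lambda d: 0.5, lambda d: float(d.endswith("strategy-2"))]
drafts = generate_drafts("solve x^2 = 4")
best, score = select_draft(drafts, peer_review(drafts, reviewers))
```

In this toy, the reward model's preference for `strategy-1` outweighs the peers' mild preference for `strategy-2`, so the blended score selects `strategy-1`; tuning `alpha` shifts that balance between peer critique and the learned reward.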
Problem

Research questions and friction points this paper is trying to address.

Addresses single-shot response limitations in multi-agent frameworks
Enhances reasoning diversity through multi-draft chain exploration
Improves reinforcement learning alignment with peer-guided reflection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent chain-of-draft reasoning for RL-enhanced LLMs
Generates multiple drafts per query for exploration
Uses peer evaluation and reward model for selection
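Once a draft is selected, its reward feeds an actor-critic update. The class below is a scalar caricature under stated assumptions (`TinyActorCritic`, `theta`, `v`, and `lr` are all invented for illustration; real DRAFT-RL would update LLM policy parameters, not two scalars), showing only the advantage structure: the critic's baseline is subtracted from the selected draft's reward, and both actor and critic move along that advantage.

```python
class TinyActorCritic:
    """Scalar stand-in for actor-critic refinement: theta is an
    actor parameter (a draft-preference score) and v is the critic's
    value baseline. Both are nudged by the selected draft's reward."""

    def __init__(self, lr=0.1):
        self.theta = 0.0  # actor parameter
        self.v = 0.0      # critic value estimate
        self.lr = lr

    def update(self, reward):
        advantage = reward - self.v        # A = r - V(s)
        self.theta += self.lr * advantage  # actor: favor high-advantage drafts
        self.v += self.lr * advantage      # critic: move baseline toward reward
        return advantage
```

As the baseline `v` converges toward the typical reward, the advantage shrinks, so only drafts that beat expectations keep shifting the policy; that variance reduction is the usual motivation for the actor-critic form over plain policy gradients.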
Yuanhao Li (BUPT)
Mingshan Liu (HKUST(GZ))
Hongbo Wang (BUPT)
Yiding Zhang (BUPT)
Yifei Ma (Applied Scientist, Amazon.com; recommender systems, Bayesian optimization, bandits, control)
Wei Tan (University of Bristol)