Multi-Agent Tool-Integrated Policy Optimization

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address context-length limitations and noisy tool responses in multi-turn tool-integrated tasks with large language models (LLMs), this paper proposes a lightweight multi-role collaboration framework that dynamically partitions a single LLM instance into distinct "Planner" and "Worker" roles, eliminating the memory overhead of multi-model deployment. It introduces a cross-role credit assignment mechanism that combines role-specific prompting with reinforcement learning for end-to-end post-training, enabling joint optimization of planning and execution. Evaluated on the GAIA-text, WebWalkerQA, and FRAMES benchmarks, the method achieves an average relative performance gain of 18.38% and is markedly more robust to noisy tool outputs. To the authors' knowledge, this is the first work to realize efficient, end-to-end reinforcement training of multiple collaborative roles within a single LLM.

📝 Abstract
Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, but they suffer from limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner- and worker-agents to manage context. However, no existing methods support effective reinforcement learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which enables distinct roles (planner and worker) to be trained within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines by an average of 18.38% relative improvement in performance and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.
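The abstract's core idea, one LLM instance playing both planner and worker via role-specific prompts, can be sketched as follows. The prompt wording, the `build_messages` helper, and the chat-message format are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of MATPO's single-model, multi-role setup.
# All prompt text and helper names here are assumptions for illustration.

PLANNER_PROMPT = (
    "You are the Planner. Decompose the user's task into sub-tasks, "
    "delegate each to a Worker, and summarize the Workers' results."
)
WORKER_PROMPT = (
    "You are the Worker. Solve the assigned sub-task using the available "
    "tools and return a concise answer to the Planner."
)

def build_messages(role: str, task: str) -> list[dict]:
    """Wrap the same task with a role-specific system prompt so a single
    LLM instance can play either the planner or the worker role."""
    system = PLANNER_PROMPT if role == "planner" else WORKER_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# The same underlying model serves both roles; only the prompt differs.
planner_msgs = build_messages("planner", "Find the 2023 GDP of France.")
worker_msgs = build_messages("worker", "Search the web for France's 2023 GDP.")
```

Because both roles share one set of weights, RL updates from planner and worker rollouts flow into the same model, which is what removes the memory cost of deploying separate planner and worker LLMs.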
Problem

Research questions and friction points this paper is trying to address.

Addresses the lack of reinforcement-learning post-training methods for tool-integrated multi-agent frameworks
Mitigates limited context length and noisy tool responses in knowledge-intensive, complex reasoning tasks
Enables specialized planner and worker roles to be trained within a single LLM instance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework with planner and worker roles
Single LLM instance trained via reinforcement learning
Role-specific prompts enable specialization without multiple models
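One simple way to realize the cross-role credit assignment described above is to broadcast the episode's terminal reward to the planner trajectory and to every worker rollout it spawned, so a single outcome signal trains both roles end to end. The sketch below assumes that uniform-broadcast scheme; the paper's actual rule may be more refined, and all names are hypothetical.

```python
def assign_credit(final_reward: float,
                  planner_steps: list,
                  worker_rollouts: dict) -> tuple:
    """Broadcast a single terminal reward across roles.

    Every planner step and every step of every worker rollout receives
    the same return, so RL jointly optimizes planning and execution.
    Illustrative only: MATPO's actual credit-assignment mechanism is
    derived in the paper and may differ from this uniform broadcast.
    """
    planner_returns = [final_reward] * len(planner_steps)
    worker_returns = {
        worker_id: [final_reward] * len(steps)
        for worker_id, steps in worker_rollouts.items()
    }
    return planner_returns, worker_returns

# Example: one planner trajectory that delegated to two workers.
p_ret, w_ret = assign_credit(
    1.0,
    planner_steps=["plan", "delegate", "summarize"],
    worker_rollouts={"w1": ["search", "answer"], "w2": ["browse"]},
)
```

Under this scheme, a correct final answer rewards every role that contributed to it, which is the property that lets both roles be optimized jointly within one model.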