Learning to Orchestrate Agents in Natural Language with the Conductor

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of dynamically integrating model expertise in multi-LLM collaborative reasoning. We propose Conductor, a reinforcement learning (RL)-based framework that trains a 7B-parameter orchestrator model to learn collaboration policies across heterogeneous open- and closed-source LLM agents in an end-to-end manner. Our key contribution is the first RL-driven automatic discovery of inter-model collaboration patterns, enabling dynamic communication topology construction, recursive task decomposition, and online adaptive scheduling. The method integrates prompt engineering, topology optimization, and iterative RL training to substantially enhance the collective reasoning capability of multi-agent systems. Evaluated on high-difficulty benchmarks, including LiveCodeBench and GPQA, Conductor achieves state-of-the-art performance, outperforming both monolithic models and existing collaborative approaches. It demonstrates strong generalization and compositional flexibility across diverse agent configurations and tasks.

📝 Abstract
Powerful large language models (LLMs) from different providers have been expensively trained and finetuned to specialize in varying domains. In this work, we introduce a new kind of Conductor model trained with reinforcement learning to automatically discover powerful coordination strategies among LLMs. Our Conductor learns not only to design targeted communication topologies for effective agent-to-agent collaboration, but also to craft focused, prompt-engineered instructions that maximally leverage each LLM's individual capabilities. We show that, by learning optimal coordination strategies over pools of powerful worker LLMs, a 7B Conductor achieves significant performance gains beyond any individual worker, attaining state-of-the-art results on challenging reasoning benchmarks such as LiveCodeBench and GPQA. By training with randomized agent pools, our Conductor effectively adapts to arbitrary sets of open- and closed-source agents, meeting any user requirements. Furthermore, allowing the Conductor to select itself as a worker gives rise to recursive topologies, elevating performance with a new form of dynamic test-time scaling through online iterative adaptation. More broadly, ours is among the early works demonstrating that language model coordination can be unlocked through RL, where powerful coordination strategies emerge naturally in LLMs through pure end-to-end reward maximization.
Problem

Research questions and friction points this paper is trying to address.

Learning to coordinate multiple specialized LLMs automatically
Designing communication topologies and prompts for agent collaboration
Adapting to diverse agent pools and enabling recursive topologies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning trains Conductor to coordinate LLMs
Conductor designs communication topologies and prompts for agents
Adapts to arbitrary agent pools and enables recursive topologies
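The orchestration pattern described above can be illustrated with a minimal sketch. Everything here is a simplified illustration, not the paper's implementation: the worker functions, `AGENT_POOL`, `conductor_policy`, and `orchestrate` are hypothetical stand-ins, and the learned 7B Conductor is replaced by a trivial policy that assigns each agent a tailored prompt in a fixed chain topology.

```python
# Hypothetical worker agents standing in for real open-/closed-source LLMs.
# In the actual system these would be model API calls; here they are stubs.
def worker_a(prompt: str) -> str:
    return f"worker_a answer to: {prompt}"

def worker_b(prompt: str) -> str:
    return f"worker_b answer to: {prompt}"

AGENT_POOL = {"worker_a": worker_a, "worker_b": worker_b}

def conductor_policy(task: str, pool: dict) -> list:
    """Stand-in for the learned Conductor: returns an ordered plan of
    (agent_name, tailored_prompt) pairs, i.e. a chain communication
    topology. The trained policy would choose agents, topology, and
    prompts via RL; this stub simply queries every agent in the pool."""
    return [(name, f"[{name}] Solve step by step: {task}") for name in pool]

def orchestrate(task: str, pool: dict) -> str:
    """Execute the plan sequentially, feeding each agent its tailored
    prompt plus the running transcript of earlier agents' outputs."""
    transcript = []
    for name, prompt in conductor_policy(task, pool):
        context = "\n".join(transcript)
        full_prompt = prompt + ("\nContext:\n" + context if context else "")
        transcript.append(pool[name](full_prompt))
    return transcript[-1]  # final agent's answer is the system output

result = orchestrate("What is 2 + 2?", AGENT_POOL)
```

In the paper's full setting, the policy's agent and prompt choices would be optimized end-to-end with RL against task reward, the topology can branch rather than chain, and the Conductor may select itself as a worker, yielding the recursive topologies mentioned above.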
Stefan Nielsen
FPT AI
Machine Learning, Deep Learning
Edoardo Cetin
Sakana AI
machine learning, reinforcement learning, unsupervised learning
Peter Schwendeman
University of Michigan, USA
Qi Sun
Sakana AI, Japan; Institute of Science Tokyo, Japan
Jinglue Xu
Sakana AI, Japan
Yujin Tang
Sakana AI, Japan