🤖 AI Summary
To address the poor scalability of monolithic large language models (LLMs) on complex tasks and the low efficiency of multi-agent collaboration, this paper proposes the “Puppeteer” paradigm: a centralized orchestrator trained via reinforcement learning dynamically schedules heterogeneous agents in a task-state-driven, adaptive manner. The paper's key contributions are twofold: (1) it introduces the first evolvable dynamic orchestration mechanism, overcoming the limitations of static agent topologies; and (2) it empirically finds that compact, cyclic collective reasoning structures naturally emerge during orchestrator training, a previously unreported phenomenon. Experiments demonstrate consistent and significant improvements over baselines in both closed- and open-source settings: task completion rates and reasoning efficiency increase concurrently, computational overhead is substantially reduced, and reasoning paths become markedly more compact.
📝 Abstract
Large language models (LLMs) have achieved remarkable results across diverse downstream tasks, but their monolithic nature restricts scalability and efficiency in complex problem-solving. While recent research explores multi-agent collaboration among LLMs, most approaches rely on static organizational structures that struggle to adapt as task complexity and the number of agents grow, resulting in coordination overhead and inefficiencies. To this end, we propose a puppeteer-style paradigm for LLM-based multi-agent collaboration, where a centralized orchestrator ("puppeteer") dynamically directs agents ("puppets") in response to evolving task states. This orchestrator is trained via reinforcement learning to adaptively sequence and prioritize agents, enabling flexible and evolvable collective reasoning. Experiments on closed- and open-domain scenarios show that this method achieves superior performance with reduced computational costs. Analyses further reveal that the key improvements consistently stem from the emergence of more compact, cyclic reasoning structures under the orchestrator's evolution.
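The orchestration idea described above can be illustrated with a minimal sketch: a softmax policy that, conditioned on a coarse task state, selects which agent to invoke next (or terminates), and is updated with a REINFORCE-style policy gradient. Everything here is an illustrative assumption, not the paper's implementation: the agent names, the single-feature state (index of the previously invoked agent), the toy reward, and the choice of plain REINFORCE as a stand-in for the paper's RL training.

```python
import math
import random

random.seed(0)

# Hypothetical agent pool; in the paper these are heterogeneous LLM agents.
AGENTS = ["planner", "coder", "critic", "terminate"]

# One row of logits per state; the extra row is the "no previous agent" state.
weights = [[0.0] * len(AGENTS) for _ in range(len(AGENTS) + 1)]

def policy(state):
    """Softmax distribution over agents for the given task state."""
    logits = weights[state]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def rollout(max_steps=6):
    """One orchestration episode: the puppeteer picks agents until 'terminate'."""
    state, trajectory = len(AGENTS), []  # start in the "no previous agent" state
    for _ in range(max_steps):
        a = sample(policy(state))
        trajectory.append((state, a))
        if AGENTS[a] == "terminate":
            break
        state = a
    return trajectory

def toy_reward(trajectory):
    # Illustrative reward: success if a coder step follows a planner step,
    # minus a small per-call cost (mirroring the paper's efficiency objective).
    names = [AGENTS[a] for _, a in trajectory]
    success = any(names[i] == "planner" and names[i + 1] == "coder"
                  for i in range(len(names) - 1))
    return (1.0 if success else 0.0) - 0.05 * len(names)

def train(episodes=2000, lr=0.1):
    for _ in range(episodes):
        traj = rollout()
        r = toy_reward(traj)
        for state, a in traj:  # REINFORCE update on every decision
            probs = policy(state)
            for i in range(len(AGENTS)):
                grad = (1.0 if i == a else 0.0) - probs[i]
                weights[state][i] += lr * r * grad

train()
print([AGENTS[a] for _, a in rollout()])
```

Because the reward couples task success with a per-call cost, training pressures the policy toward short agent sequences, loosely echoing the paper's observation that compact reasoning structures emerge under the orchestrator's evolution.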