🤖 AI Summary
Existing multi-agent frameworks often rely on fixed agent roles or centralized control, limiting scalability and adaptability in long-horizon reasoning. To address this, we propose a swarm-intelligence-inspired, decentralized multi-agent reasoning framework featuring three complementary agent roles (Explorers, Workers, and Validators) organized in a closed-loop collaborative architecture. Our approach introduces decentralized dynamic collaboration: autonomous task allocation via embedding-based probabilistic matching, pheromone-like reinforcement mechanisms to guide cooperative convergence, and adaptive role configuration with event-driven execution. Crucially, the framework eliminates any global controller, enabling fully self-organized reasoning through LLM-based agents. Evaluated on symbolic reasoning, scientific literature synthesis, and scientific programming tasks, it achieves significantly higher accuracy and robustness than state-of-the-art baselines, empirically validating the efficacy of swarm-inspired coordination for complex, open-ended reasoning.
📝 Abstract
Large language model (LLM) agents have shown remarkable reasoning abilities. However, existing multi-agent frameworks often rely on fixed roles or centralized control, limiting scalability and adaptability in long-horizon reasoning. We introduce SwarmSys, a closed-loop framework for distributed multi-agent reasoning inspired by swarm intelligence. Coordination in SwarmSys emerges through iterative interactions among three specialized roles (Explorers, Workers, and Validators) that continuously cycle through exploration, exploitation, and validation. To enable scalable and adaptive collaboration, we integrate adaptive agent and event profiles, embedding-based probabilistic matching, and a pheromone-inspired reinforcement mechanism, supporting dynamic task allocation and self-organizing convergence without global supervision. Across symbolic reasoning, research synthesis, and scientific programming tasks, SwarmSys consistently outperforms baselines, improving both accuracy and reasoning stability. These findings highlight swarm-inspired coordination as a promising paradigm for scalable, robust, and adaptive multi-agent reasoning, suggesting that coordination scaling may rival model scaling in advancing LLM intelligence.
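To make the coordination mechanism concrete, the sketch below shows one way embedding-based probabilistic matching and pheromone-inspired reinforcement could fit together for decentralized task allocation. This is an illustrative toy only, not SwarmSys's actual implementation: the class names, the use of cosine similarity, the softmax temperature, and the evaporation/deposit constants are all assumptions.

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class Agent:
    def __init__(self, name, profile_embedding):
        self.name = name
        self.profile = profile_embedding  # embedding of the agent's role profile

class Swarm:
    def __init__(self, agents, evaporation=0.1, deposit=0.5, temperature=0.2):
        self.agents = agents
        # Pheromone trail strength per agent; reinforced by success, decayed over time.
        self.pheromone = {a.name: 1.0 for a in agents}
        self.evaporation = evaporation
        self.deposit = deposit
        self.temperature = temperature

    def match_probabilities(self, task_embedding):
        """Softmax over (similarity x pheromone): probabilistic, not greedy,
        so allocation needs no central scheduler."""
        scores = [
            cosine(a.profile, task_embedding) * self.pheromone[a.name]
            for a in self.agents
        ]
        exps = [math.exp(s / self.temperature) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def allocate(self, task_embedding, rng=random):
        """Sample an agent for the task in proportion to its match probability."""
        probs = self.match_probabilities(task_embedding)
        return rng.choices(self.agents, weights=probs, k=1)[0]

    def reinforce(self, agent, success):
        """Evaporate all trails, then deposit pheromone on the agent if it succeeded,
        biasing future allocations toward historically effective agents."""
        for name in self.pheromone:
            self.pheromone[name] *= (1.0 - self.evaporation)
        if success:
            self.pheromone[agent.name] += self.deposit
```

Run over many task/feedback cycles, this kind of loop lets allocation converge toward well-matched, reliable agents without any global controller, which is the intuition behind the paper's self-organizing convergence claim.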