🤖 AI Summary
Existing LLM-based multi-agent systems (MAS) rely on handcrafted, static communication topologies, lacking context awareness and adaptability across diverse industrial tasks. To address this, we propose AMAS, a framework whose core Dynamic Graph Designer (DGD) autonomously synthesizes agent communication topologies from input task semantics, enabling context-sensitive, adaptive collaboration. The DGD couples a lightweight LLM adaptation mechanism with a task-driven topology optimization strategy, supporting intelligent routing across heterogeneous scenarios, including question answering, mathematical reasoning, and code generation. Evaluated across multiple mainstream LLM architectures, AMAS consistently outperforms both single-agent baselines and conventional MAS designs, achieving state-of-the-art (SOTA) results on several benchmarks. These results empirically validate that dynamic, task-aware topology design is critical to the generalization capability and task adaptability of LLM-based MAS.
📝 Abstract
Although large language models (LLMs) have revolutionized natural language processing, their practical deployment as autonomous multi-agent systems (MAS) for industrial problem-solving still faces persistent barriers. Conventional MAS architectures are fundamentally restricted by inflexible, hand-crafted graph topologies that lack contextual responsiveness, diminishing their efficacy across varied academic and commercial workloads. To overcome these constraints, we introduce AMAS, a framework that redefines LLM-based MAS through a novel dynamic graph designer. This component autonomously identifies task-specific optimal graph configurations via lightweight LLM adaptation, eliminating reliance on monolithic, universally applied structural templates. Instead, AMAS exploits the intrinsic properties of individual inputs to route queries through task-optimized agent pathways. Rigorous validation on question answering, mathematical reasoning, and code generation benchmarks confirms that AMAS consistently exceeds state-of-the-art single-agent and multi-agent approaches across diverse LLM architectures. Our investigation establishes that context-sensitive structural adaptability is a foundational requirement for high-performance LLM-based MAS deployments.
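To make the idea of input-conditioned routing concrete, here is a minimal, hypothetical sketch of a dynamic graph designer. The paper's DGD selects topologies via lightweight LLM adaptation; in this toy version a keyword heuristic stands in for that component, and the agents (`solver`, `verifier`, `coder`), topology names, and pipelines are all illustrative assumptions, not the paper's actual design.

```python
# Toy sketch of AMAS-style dynamic topology selection (NOT the paper's code).
# A keyword heuristic stands in for the lightweight LLM-based graph designer.

def solver(msg: str) -> str:
    # Stub reasoning agent.
    return f"solve({msg})"

def verifier(msg: str) -> str:
    # Stub checking agent.
    return f"verify({msg})"

def coder(msg: str) -> str:
    # Stub code-generation agent.
    return f"code({msg})"

AGENTS = {"solver": solver, "verifier": verifier, "coder": coder}

# Candidate topologies, here simplified to linear agent pipelines
# (a special case of a communication graph).
TOPOLOGIES = {
    "math": ["solver", "verifier"],
    "code": ["coder", "verifier"],
    "qa": ["solver"],
}

def design_topology(task: str) -> list[str]:
    """Stand-in for the DGD: pick a pipeline from the task's surface semantics."""
    t = task.lower()
    if any(k in t for k in ("def ", "function", "implement")):
        return TOPOLOGIES["code"]
    if any(k in t for k in ("sum", "prove", "integral", "solve")):
        return TOPOLOGIES["math"]
    return TOPOLOGIES["qa"]

def run(task: str) -> str:
    """Route the query through the task-specific agent pathway."""
    msg = task
    for name in design_topology(task):
        msg = AGENTS[name](msg)
    return msg
```

The point of the sketch is the control flow: the topology is chosen per input at run time rather than fixed once for all tasks, so a coding query and a factual question traverse different agent graphs.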