Dynamic Generation of Multi-LLM Agents Communication Topologies with Graph Diffusion Models

๐Ÿ“… 2025-10-09
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In multi-agent systems, the communication topology among LLM-based agents must dynamically balance task performance, communication overhead, and robustnessโ€”yet existing static or handcrafted topologies fail to adapt to varying task complexity. To address this, we propose the **Guided Topology Diffusion (GTD)** framework, which models topology generation as a multi-objective reward-driven iterative discrete graph diffusion process. GTD employs a conditional graph diffusion model coupled with a lightweight agent model, enabling gradient-free, real-time, and adaptive sparse topology evolution. Evaluated across multiple benchmark tasks, GTD reduces token consumption by 32% on average while improving performance on complex collaborative tasks by 18.7%. To our knowledge, GTD is the first method to achieve task-adaptive, efficient, and robust communication structure synthesis for LLM-based multi-agent systems.

Technology Category

Application Category

๐Ÿ“ Abstract
The efficiency of multi-agent systems driven by large language models (LLMs) largely hinges on their communication topology. However, designing an optimal topology is a non-trivial challenge, as it requires balancing competing objectives such as task performance, communication cost, and robustness. Existing frameworks often rely on static or hand-crafted topologies, which inherently fail to adapt to diverse task requirements, leading to either excessive token consumption for simple problems or performance bottlenecks for complex ones. To address this challenge, we introduce a novel generative framework called extit{Guided Topology Diffusion (GTD)}. Inspired by conditional discrete graph diffusion models, GTD formulates topology synthesis as an iterative construction process. At each step, the generation is steered by a lightweight proxy model that predicts multi-objective rewards (e.g., accuracy, utility, cost), enabling real-time, gradient-free optimization towards task-adaptive topologies. This iterative, guided synthesis process distinguishes GTD from single-step generative frameworks, enabling it to better navigate complex design trade-offs. We validated GTD across multiple benchmarks, and experiments show that this framework can generate highly task-adaptive, sparse, and efficient communication topologies, significantly outperforming existing methods in LLM agent collaboration.
Problem

Research questions and friction points this paper is trying to address.

Optimizing multi-agent communication topologies for task performance
Balancing competing objectives like cost, robustness, and efficiency
Overcoming limitations of static topologies with adaptive generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates communication topologies using graph diffusion models
Employs lightweight proxy model for multi-objective reward guidance
Iteratively constructs task-adaptive topologies via gradient-free optimization
๐Ÿ”Ž Similar Papers
2024-02-16arXiv.orgCitations: 16
E
Eric Hanchen Jiang
University of California, Los Angeles
Guancheng Wan
Guancheng Wan
Computer Science, UCLA
AI AgentAI4ScienceLarge Language ModelTrustworthy AI
S
Sophia Yin
University of California, Los Angeles
M
Mengting Li
University of California, Los Angeles
Y
Yuchen Wu
University of Washington
X
Xiao Liang
University of California, Los Angeles
X
Xinfeng Li
Nanyang Technological University
Yizhou Sun
Yizhou Sun
Professor, Computer Science, UCLA
Information NetworksKnowledge GraphsGraph Neural NetworksData MiningMachine Learning
W
Wei Wang
University of California, Los Angeles
K
Kai-Wei Chang
University of California, Los Angeles
Ying Nian Wu
Ying Nian Wu
UCLA Department of Statistics and Data Science
Generative AIRepresentation learningComputer visionComputational neuroscienceBioinformatics