Optimizing Sequential Multi-Step Tasks with Parallel LLM Agents

πŸ“… 2025-07-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the high latency and low task completion rates that serial reasoning causes in LLM-based multi-agent systems on complex, multi-step tasks, this paper proposes M1-Parallel, a framework that runs multiple LLM agent teams concurrently to explore diverse solution paths via event-driven asynchronous communication, then fuses their results for greater robustness. Its key idea is to treat the diversity of valid plans as a parallel search space, combined with lightweight scheduling and asynchronous message passing to accelerate end-to-end execution without sacrificing accuracy. Experiments on representative complex tasks show that M1-Parallel achieves up to 2.2× speedup and a 37.5% improvement in task completion rate over state-of-the-art baselines, outperforming both sequential and coarse-grained parallel approaches.
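The early-termination mode described above can be sketched with Python's `asyncio`: several teams start on the same task, and the first finished answer wins while the rest are cancelled. This is a minimal illustration, not the paper's implementation; `run_team` is a hypothetical stand-in for a full plan-execute-observe agent loop, and the function names are invented for this sketch.

```python
import asyncio
import random

async def run_team(team_id: int, task: str) -> str:
    """Hypothetical stand-in for one multi-agent team solving `task`.

    A real team would iterate plan -> execute -> observe; here a random
    sleep models teams whose plans take different amounts of time.
    """
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"team-{team_id} answer for {task!r}"

async def solve_early_termination(task: str, n_teams: int = 3) -> str:
    """Launch n_teams in parallel and return the first finished answer,
    cancelling the remaining teams (the early-termination mode)."""
    tasks = [asyncio.create_task(run_team(i, task)) for i in range(n_teams)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()  # stop slower teams once a result is available
    return done.pop().result()

answer = asyncio.run(solve_early_termination("What is 2+2?"))
```

Because every team is assumed to produce a valid (if differently derived) solution, taking the first completion reduces end-to-end latency to that of the fastest plan.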

πŸ“ Abstract
Large language model (LLM)-based multi-agent systems have demonstrated remarkable promise for tackling complex tasks by breaking them down into subtasks that are iteratively planned, executed, observed, and refined. Despite their effectiveness, these systems often incur high latency because real-world problems frequently demand multiple iterative cycles of reasoning steps. To address this challenge, we propose M1-Parallel, a framework that concurrently runs multiple multi-agent teams in parallel to uncover distinct solution paths. By leveraging an event-driven communication model with asynchronous messaging, M1-Parallel efficiently capitalizes on the inherent diversity of valid plans to either reduce end-to-end latency or boost task completion rates. Our experiments on complex tasks show that M1-Parallel with early termination achieves up to 2.2× speedup while preserving accuracy, and that M1-Parallel with aggregation yields higher task completion rates. We further investigate strategies aimed at encouraging diverse execution plans but observe no additional performance gains over repeated sampling. Overall, these findings underscore the potential of parallel plan execution for optimizing multi-agent systems for real-world, high-complexity reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in multi-agent LLM systems
Enhancing task completion rates via parallel execution
Optimizing complex reasoning with diverse solution paths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel multi-agent teams for diverse solutions
Event-driven async messaging for efficiency
Early termination and aggregation strategies
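The aggregation strategy listed above can be sketched in the same spirit: all teams run to completion and their candidate answers are fused, here by a simple majority vote. This is an assumption-laden illustration; the paper's actual fusion step may differ, and `run_team` is again a hypothetical placeholder for a full agent team.

```python
import asyncio
from collections import Counter

async def run_team(team_id: int, task: str) -> str:
    """Hypothetical team producing a candidate answer.

    Here teams 0 and 1 agree on "4" while team 2 disagrees, to
    demonstrate how fusion tolerates a minority of wrong answers.
    """
    await asyncio.sleep(0.01)
    return "4" if team_id < 2 else "5"

async def solve_with_aggregation(task: str, n_teams: int = 3) -> str:
    """Run all teams to completion and fuse answers by majority vote
    (one simple aggregation strategy)."""
    answers = await asyncio.gather(*(run_team(i, task) for i in range(n_teams)))
    return Counter(answers).most_common(1)[0][0]

result = asyncio.run(solve_with_aggregation("What is 2+2?"))
# result == "4": two of the three teams agree
```

Aggregation trades latency for robustness: the system waits for every team but can recover from individual teams failing or answering incorrectly, which is how higher task completion rates are achieved.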
πŸ”Ž Similar Papers
No similar papers found.