🤖 AI Summary
This study addresses computational task allocation in multi-agent LLM systems, balancing cost, efficiency, and performance. Methodologically, it integrates an LLM-based meta-reasoning framework, hierarchical task decomposition, role-aware prompting, and concurrent multi-agent action modeling. Its key contributions are threefold: (1) the first systematic comparative analysis of LLMs acting as *planners* versus *orchestrators* in resource scheduling; (2) explicit modeling of worker capabilities to enhance allocation robustness; and (3) empirical validation showing that the planner paradigm significantly outperforms the orchestrator paradigm, improving both task throughput and agent utilization under concurrency. Experiments further demonstrate that explicit capability prompting boosts task-assignment accuracy by 23.6% in suboptimal-worker scenarios. Collectively, the work advances principled, scalable, and robust task orchestration in LLM-based multi-agent systems.
📝 Abstract
With the development of LLMs as agents, there is growing interest in connecting multiple agents into multi-agent systems that solve tasks concurrently, with particular attention to task assignment and coordination. This paper explores how LLMs can effectively allocate computational tasks among multiple agents, considering factors such as cost, efficiency, and performance. We address key questions about LLMs acting as orchestrators and as planners, comparing the two paradigms in task assignment and coordination. Our experiments demonstrate that LLMs can achieve high validity and accuracy in resource allocation tasks. We find that the planner method outperforms the orchestrator method in handling concurrent actions, yielding improved efficiency and better utilization of agents. Additionally, we show that providing explicit information about worker capabilities enhances planners' allocation strategies, particularly when dealing with suboptimal workers.
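To make the planner/orchestrator distinction concrete, here is a minimal illustrative sketch, not the paper's implementation: a planner produces the full task-to-worker assignment upfront (so assigned tasks can run concurrently), while an orchestrator makes one assignment per round trip. A simple capability-aware greedy rule stands in for the LLM's decisions, and all names (`Worker`, `plan`, `orchestrate`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    skills: set                       # capabilities exposed in the prompt
    queue: list = field(default_factory=list)

def plan(tasks, workers):
    """Planner paradigm: decide the entire assignment in one pass, so
    tasks can then be dispatched to all workers concurrently."""
    assignment = {}
    for task, skill in tasks:
        # prefer a capable worker; break ties by shortest queue (load balance)
        capable = [w for w in workers if skill in w.skills] or workers
        best = min(capable, key=lambda w: len(w.queue))
        best.queue.append(task)
        assignment[task] = best.name
    return assignment

def orchestrate(tasks, workers):
    """Orchestrator paradigm: one assignment decision per step, observing
    system state between steps (sequential, hence less concurrency)."""
    assignment = {}
    pending = list(tasks)
    while pending:
        task, skill = pending.pop(0)  # one decision per round trip
        capable = [w for w in workers if skill in w.skills] or workers
        best = min(capable, key=lambda w: len(w.queue))
        best.queue.append(task)
        assignment[task] = best.name
    return assignment
```

Under this greedy stand-in both paradigms reach the same assignment; the paper's point is that with an actual LLM making the decisions, the upfront plan enables concurrent execution and better agent utilization, and that exposing worker `skills` in the prompt improves robustness to suboptimal workers.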