🤖 AI Summary
This work addresses the insufficient evaluation of autonomy and self-organization in large language models (LLMs) during long-term multi-agent collaboration. We propose the “Agent-as-Tool” paradigm and introduce Tool-RoCo—the first LLM-based self-organization benchmark for multi-robot coordination. Tool-RoCo defines four collaborative paradigms to quantify autonomy and systematically characterizes the dynamic evolution of agent activation and inter-agent coordination. We jointly evaluate correctness of output format, parameter accuracy, and coordination efficiency across three tasks: SORT, PACK, and CABINET. Experiments reveal that only 7.09% of tool invocations are cooperative tools, while activation tools account for 96.42%, indicating that agents tend to keep others active and rarely deactivate them, highlighting critical deficiencies in adaptive deactivation and deep coordination. Our framework provides a scalable, interpretable foundation for rigorous assessment of multi-agent LLMs.
📝 Abstract
This study proposes Tool-RoCo, a novel benchmark for evaluating large language models (LLMs) in long-term multi-agent cooperation, built on RoCo, a multi-robot cooperation benchmark. Recent research on LLM-based multi-agent systems has relied on predefined orchestration while ignoring agent autonomy. Tool-RoCo treats other agents as tools and introduces cooperative tools, leveraging tool usage to evaluate multi-agent cooperation and self-organization. Tool usage means that each agent (LLM) selects a tool from a candidate set based on the current state, receives feedback, and adjusts its selection in subsequent rounds. To evaluate different autonomy levels, we propose four LLM paradigms: (1) centralized cooperation, where a single LLM allocates tools to all agents; (2) centralized self-organization, where a central LLM autonomously activates agents while keeping others inactive; (3) decentralized cooperation, where each agent has its own LLM and calls tools based on local information; and (4) self-organization, where a randomly chosen initial agent can request collaboration, activating additional agents via tool calls. Tool-RoCo includes three multi-robot tasks, SORT, PACK, and CABINET, to measure format accuracy, parameter accuracy, and agent coordination through tool usage. Results with several LLMs showed that cooperative tools accounted for only 7.09% of all tool calls, indicating that LLM-based agents rarely invoked others as assistants. Moreover, activation tools accounted for 96.42%, suggesting that current LLMs tend to keep agents active while seldom deactivating them for adaptive coordination. Tool-RoCo provides a systematic benchmark to evaluate LLM autonomy and cooperation in multi-agent tasks. Code and Demo: https://github.com/ColaZhang22/Tool-Roco
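The tool-usage loop and the "Agent-as-Tool" idea described above can be sketched in a few lines of Python. All class and function names below are illustrative assumptions, not the benchmark's actual API: each active agent picks from a candidate set that includes ordinary task tools plus cooperative tools that expose the other agents as callable assistants.

```python
# Minimal sketch of the Agent-as-Tool round loop (assumed names, not Tool-RoCo's API).
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    active: bool = True
    history: list = field(default_factory=list)

    def select_tool(self, state, candidates):
        # Placeholder policy: a real agent would prompt an LLM with the
        # current state and candidate tool list, then parse its choice.
        return candidates[0]

def apply_tool(choice, state, agents):
    # Cooperative tool: activate the named agent as an assistant.
    if choice.startswith("call:"):
        target = next(a for a in agents if a.name == choice[5:])
        target.active = True
        return f"{target.name} activated"
    return f"applied {choice}"

def run_round(agents, state, base_tools):
    """One round: each active agent selects a tool, receives feedback,
    and records it to adjust its selection in later rounds."""
    for agent in agents:
        if not agent.active:
            continue
        # Other agents are exposed as cooperative "call" tools.
        coop_tools = [f"call:{a.name}" for a in agents if a is not agent]
        choice = agent.select_tool(state, base_tools + coop_tools)
        feedback = apply_tool(choice, state, agents)
        agent.history.append((choice, feedback))
```

Under this sketch, measuring the fraction of `call:` choices in the logged histories corresponds to the paper's cooperative-tool statistic.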