🤖 AI Summary
This study investigates whether multi-agent large language model systems can achieve reliable collaborative computation -- not merely information exchange -- in distributed information environments. To this end, the authors construct a scalable benchmark of 30 algorithmic tasks spanning three levels of communication complexity and run 1,620 experiments across 54 configurations. The work uncovers a previously unreported "Communication-Reasoning Gap": agents communicate actively and form reasonable interaction topologies, yet struggle to integrate distributed state into correct outcomes. Results show that as the number of agents grows, coordination overhead offsets or even reverses the benefits of parallelization, indicating that naive scaling cannot overcome context limitations. The study contributes a role-agnostic simulation framework, a systematic task-design methodology, and a novel approach to analyzing communication complexity.
📝 Abstract
Large language models are increasingly deployed in multi-agent systems to overcome context limitations by distributing information across agents. Yet whether agents can reliably compute with distributed information -- rather than merely exchange it -- remains an open question. We introduce Silo-Bench, a role-agnostic benchmark of 30 algorithmic tasks across three communication complexity levels, evaluating 54 configurations over 1,620 experiments. Our experiments expose a fundamental Communication-Reasoning Gap: agents spontaneously form task-appropriate coordination topologies and exchange information actively, yet systematically fail to synthesize distributed state into correct answers. The failure is localized to the reasoning-integration stage -- agents often acquire sufficient information but cannot integrate it. The resulting coordination overhead compounds with scale, eventually eliminating parallelization gains entirely. These findings demonstrate that naively scaling agent count cannot circumvent context limitations, and Silo-Bench provides a foundation for tracking progress toward genuinely collaborative multi-agent systems.