Brain-Inspired Graph Multi-Agent Systems for LLM Reasoning

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the accuracy collapse of large language models (LLMs) on complex multi-step reasoning tasks, which limits further performance gains. Inspired by the Global Workspace Theory of human cognition, the authors propose BIGMAS, an architecture that orchestrates multiple specialized LLM agents via a dynamically constructed directed graph. A GraphDesigner module automatically builds task-adaptive topologies, while a global state-aware Orchestrator schedules the reasoning process within a shared workspace, yielding architectural gains orthogonal to model-level capabilities. Evaluated on challenging benchmarks such as Game24, Six Fives, and Tower of London, BIGMAS substantially improves the reasoning performance of six mainstream LLMs and consistently outperforms established multi-agent baselines, including ReAct and Tree of Thoughts.

πŸ“ Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of language tasks, yet complex multi-step reasoning remains a fundamental challenge. While Large Reasoning Models (LRMs) equipped with extended chain-of-thought mechanisms demonstrate improved performance over standard LLMs, both model types still suffer from accuracy collapse on sufficiently complex tasks, suggesting that scaling model-level reasoning alone is insufficient. Inspired by the global workspace theory of human cognition, we propose Brain-Inspired Graph Multi-Agent Systems (BIGMAS), in which specialized LLM agents are organized as nodes in a dynamically constructed directed graph and coordinate exclusively through a centralized shared workspace. A problem-adaptive GraphDesigner constructs task-specific agent topologies, while a global Orchestrator leverages the complete shared state for routing decisions, overcoming the local-view bottleneck of reactive approaches. Experiments on Game24, Six Fives, and Tower of London across six frontier LLMs demonstrate that BIGMAS consistently improves reasoning performance for both standard LLMs and LRMs, outperforming existing multi-agent baselines including ReAct and Tree of Thoughts, showing that multi-agent architectural design provides complementary gains orthogonal to model-level reasoning enhancements.
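The coordination scheme described in the abstract can be sketched in code. The sketch below is purely illustrative: the class names (`Workspace`, `AgentNode`, `BIGMASSketch`), the data layout, and the trivial routing rule are assumptions for exposition, not the paper's actual implementation. It only shows the structural idea of agents as nodes in a directed graph that communicate exclusively through a centralized shared workspace, with a routing step that can see the complete shared state.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Workspace:
    """Centralized shared state visible to every agent (the global workspace)."""
    entries: List[dict] = field(default_factory=list)

    def post(self, agent: str, content: str) -> None:
        self.entries.append({"agent": agent, "content": content})

@dataclass
class AgentNode:
    """A specialized agent; its role reads the full workspace and returns output."""
    name: str
    role: Callable[[Workspace], str]

class BIGMASSketch:
    """Hypothetical skeleton: a directed graph of agents plus a routing loop."""

    def __init__(self, nodes: Dict[str, AgentNode], edges: Dict[str, List[str]]):
        self.nodes = nodes  # task-adaptive topology (as a GraphDesigner might emit)
        self.edges = edges  # directed edges between agent nodes

    def run(self, start: str, workspace: Workspace, max_steps: int = 10) -> Workspace:
        current = start
        for _ in range(max_steps):
            node = self.nodes[current]
            # All communication goes through the shared workspace, never agent-to-agent.
            workspace.post(node.name, node.role(workspace))
            successors = self.edges.get(current, [])
            if not successors:
                break
            # Orchestrator step: in the paper this uses the complete shared state
            # for routing; here it is trivially the first successor.
            current = successors[0]
        return workspace
```

A usage example with stub roles in place of LLM calls: a `solver` node proposes a Game24-style expression and a downstream `verifier` node checks it, both reading only the shared workspace.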
Problem

Research questions and friction points this paper is trying to address.

multi-step reasoning
accuracy collapse
Large Language Models
complex reasoning tasks
reasoning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Multi-Agent Systems
Global Workspace Theory
Dynamic Agent Topology
Shared Workspace Coordination
LLM Reasoning