MATA: A Trainable Hierarchical Automaton System for Multi-Agent Visual Reasoning

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of current vision-language models, which often lack interpretability and are prone to hallucination in complex reasoning tasks, as well as the inability of existing compositional approaches to dynamically coordinate multi-agent collaboration. To overcome these challenges, we propose MATA—the first trainable hierarchical multi-agent automaton architecture—where learnable super-agents dynamically orchestrate sub-agents via adaptive scheduling rules and a shared memory mechanism to enable transparent, controllable collaborative reasoning. We introduce the MATA-SFT-90K dataset for supervised fine-tuning and leverage trajectory trees to support task-adaptive coordination. Experimental results demonstrate that MATA significantly outperforms both monolithic models and existing compositional methods across multiple visual reasoning benchmarks while providing interpretable execution trajectories.

📝 Abstract
Recent vision-language models have strong perceptual ability, but their implicit reasoning is hard to interpret and prone to hallucination on complex queries. Compositional methods improve interpretability, but most rely on a single agent or a hand-crafted pipeline and cannot decide when to collaborate across complementary agents or compete among overlapping ones. We introduce MATA (Multi-Agent hierarchical Trainable Automaton), a multi-agent system formulated as a hierarchical finite-state automaton for visual reasoning, whose top-level transitions are chosen by a trainable hyper agent. Each agent corresponds to a state in the hyper automaton and runs a small rule-based sub-automaton for reliable micro-control. All agents read and write a shared memory, yielding a transparent execution history. To supervise the hyper agent's transition policy, we build transition-trajectory trees and transform them into memory-to-next-state pairs, forming the MATA-SFT-90K dataset for supervised finetuning (SFT). The finetuned LLM, acting as the transition policy, understands both the query and the capabilities of the agents, and efficiently selects the best agent for the task. Across multiple visual reasoning benchmarks, MATA achieves state-of-the-art results compared with monolithic and compositional baselines. The code and dataset are available at https://github.com/ControlNet/MATA.
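The control flow the abstract describes can be pictured as a small loop: a trainable policy reads the shared memory, emits the next state (agent), that agent runs and writes its result back, and the loop ends on a terminal state. The following is a minimal illustrative sketch, not the authors' implementation; the agent names (`detect`, `count`), the rule-based `transition_policy` (standing in for the finetuned LLM), and the `SharedMemory` class are all hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    # Transparent execution history: every agent reads and appends here.
    history: list = field(default_factory=list)

    def write(self, agent: str, result: str) -> None:
        self.history.append((agent, result))

# Hypothetical sub-agents; in MATA each would run its own rule-based
# sub-automaton for micro-control.
def detector(query: str, memory: SharedMemory) -> str:
    return "objects: [cat, mat]"

def counter(query: str, memory: SharedMemory) -> str:
    return "count: 1"

AGENTS = {"detect": detector, "count": counter}

def transition_policy(query: str, memory: SharedMemory) -> str:
    """Stand-in for the finetuned LLM: maps (query, memory) -> next state."""
    if not memory.history:
        return "detect"
    if "count" in query and len(memory.history) == 1:
        return "count"
    return "DONE"  # terminal state of the hyper automaton

def run(query: str) -> list:
    memory = SharedMemory()
    state = transition_policy(query, memory)
    while state != "DONE":
        result = AGENTS[state](query, memory)
        memory.write(state, result)
        state = transition_policy(query, memory)
    return memory.history  # interpretable execution trajectory

trace = run("count the cats")
```

Under this sketch, the returned `trace` is exactly the interpretable trajectory the paper emphasizes, and each `(memory, next_state)` decision point is the kind of pair that MATA-SFT-90K would supervise.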
Problem

Research questions and friction points this paper is trying to address.

visual reasoning
multi-agent system
compositional reasoning
hallucination
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent system
hierarchical automaton
trainable transition policy
shared memory
visual reasoning