A Unified Multi-Agent Framework for Universal Multimodal Understanding and Generation

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current multimodal architectures suffer from architectural rigidity, tight coupling among components, and reliance on joint training for extension. To address these limitations, this paper proposes MAGUS, a decoupled multi-agent framework that separates cognitive processing from decision-making. Its core contributions are threefold: (1) it introduces role-specialized agents (Perceiver, Planner, and Reflector) that collaborate symbolically within a shared textual workspace to enable cross-modal understanding and generation; (2) it designs a Growth-Aware Search mechanism that orchestrates large language models and diffusion models, enabling plug-and-play extensibility and semantic alignment without joint training; (3) it supports arbitrary modality translation among text, image, audio, and video. Experiments across cross-modal instruction following and multimodal generation tasks show that MAGUS outperforms strong baselines and state-of-the-art systems and, notably, surpasses the closed-source GPT-4o on the MME benchmark.
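
To make the collaboration pattern concrete, below is a minimal sketch of the Cognition phase as the summary describes it: three role-conditioned agents exchanging plain-text messages over a shared workspace. Every name here (Workspace, RoleAgent, cognition_phase, the two-round schedule) is a hypothetical illustration, not the paper's published interface; the only assumption is that each agent wraps an LLM callable conditioned on a role prompt.

```python
from dataclasses import dataclass, field


@dataclass
class Workspace:
    """Shared textual workspace that all agents read from and write to."""
    messages: list[str] = field(default_factory=list)

    def post(self, role: str, text: str) -> None:
        self.messages.append(f"[{role}] {text}")

    def transcript(self) -> str:
        return "\n".join(self.messages)


class RoleAgent:
    """A multimodal LLM conditioned on a role prompt (hypothetical wrapper)."""

    def __init__(self, role: str, llm):
        self.role = role
        self.llm = llm  # any callable mapping a prompt string to a reply string

    def step(self, workspace: Workspace) -> None:
        reply = self.llm(f"You are the {self.role}.\n{workspace.transcript()}")
        workspace.post(self.role, reply)


def cognition_phase(user_request: str, llm, rounds: int = 2) -> str:
    """Run a Perceiver -> Planner -> Reflector dialogue over a shared workspace."""
    ws = Workspace()
    ws.post("User", user_request)
    agents = [RoleAgent(r, llm) for r in ("Perceiver", "Planner", "Reflector")]
    for _ in range(rounds):
        for agent in agents:
            agent.step(ws)
    return ws.transcript()  # the plan handed off to the Deliberation phase
```

The design point this illustrates is that all inter-agent state is plain text, which is what allows agents (or the underlying models) to be swapped in and out without joint training.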

📝 Abstract
Real-world multimodal applications often require any-to-any capabilities, enabling both understanding and generation across modalities including text, image, audio, and video. However, integrating the strengths of autoregressive large language models (LLMs) for reasoning and diffusion models for high-fidelity generation remains challenging. Existing approaches rely on rigid pipelines or tightly coupled architectures, limiting flexibility and scalability. We propose MAGUS (Multi-Agent Guided Unified Multimodal System), a modular framework that unifies multimodal understanding and generation via two decoupled phases: Cognition and Deliberation. MAGUS enables symbolic multi-agent collaboration within a shared textual workspace. In the Cognition phase, three role-conditioned multimodal LLM agents - Perceiver, Planner, and Reflector - engage in collaborative dialogue to perform structured understanding and planning. The Deliberation phase incorporates a Growth-Aware Search mechanism that orchestrates LLM-based reasoning and diffusion-based generation in a mutually reinforcing manner. MAGUS supports plug-and-play extensibility, scalable any-to-any modality conversion, and semantic alignment - all without the need for joint training. Experiments across multiple benchmarks, including image, video, and audio generation, as well as cross-modal instruction following, demonstrate that MAGUS outperforms strong baselines and state-of-the-art systems. Notably, on the MME benchmark, MAGUS surpasses the powerful closed-source model GPT-4o.
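
The abstract does not specify Growth-Aware Search at the algorithmic level, but its description of LLM-based reasoning and diffusion-based generation reinforcing one another suggests a search loop of roughly the following shape. This is a speculative sketch under stated assumptions: generate (a diffusion model), score (an LLM-based judge), and refine_prompt (LLM reasoning over a candidate) are stand-in callables, and the beam/round structure is an assumption for illustration, not the paper's algorithm.

```python
import heapq


def growth_aware_search(plan, generate, score, refine_prompt,
                        beam: int = 3, rounds: int = 2):
    """Each round, grow the highest-scoring candidates; keep all for final ranking."""
    seed = generate(plan)                        # initial diffusion sample
    frontier = [(score(plan, seed), seed)]       # (quality, candidate) pairs
    for _ in range(rounds):
        best = heapq.nlargest(beam, frontier, key=lambda t: t[0])
        for _, cand in best:
            prompt = refine_prompt(plan, cand)   # LLM critiques the candidate and re-plans
            child = generate(prompt)             # diffusion regenerates from the refined prompt
            frontier.append((score(plan, child), child))
    return max(frontier, key=lambda t: t[0])[1]  # highest-scoring candidate found
```

Under this reading, "growth-aware" would mean the search budget flows toward the most promising candidates, with the LLM and the diffusion model improving each other's outputs round by round, which matches the mutual reinforcement the abstract describes.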
Problem

Research questions and friction points this paper is trying to address.

Unifying multimodal understanding and generation flexibly
Integrating autoregressive and diffusion models effectively
Enabling scalable any-to-any modality conversion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular multi-agent framework for multimodal tasks
Decoupled Cognition and Deliberation phases
Growth-Aware Search orchestrates LLMs and diffusion
Jiulin Li
State Key Laboratory of General Artificial Intelligence, BIGAI
Ping Huang
State Key Laboratory of General Artificial Intelligence, BIGAI
Yexin Li
State Key Laboratory of General Artificial Intelligence, BIGAI
reinforcement learning · multi-agent system · multi-armed bandits · data mining
Shuo Chen
State Key Laboratory of General Artificial Intelligence, BIGAI
Juewen Hu
State Key Laboratory of General Artificial Intelligence, BIGAI
Ye Tian
State Key Laboratory of Switching and Networking, Beijing University of Posts and Telecommunications