Attention-MoA: Enhancing Mixture-of-Agents via Inter-Agent Semantic Attention and Deep Residual Synthesis

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Mixture-of-Agents approaches struggle to suppress hallucinations and refine reasoning logic because agents lack deep semantic interaction. To address this limitation, this work proposes an Inter-Agent Semantic Attention mechanism coupled with an inter-layer residual module featuring adaptive early stopping, strengthening information fusion and logical self-correction in multi-agent collaboration. The proposed Attention-MoA achieves a 91.15% length-controlled win rate on AlpacaEval 2.0 and leads in 10 of 12 capabilities on the FLASK benchmark; an ensemble of small open-source models scores 8.83 on MT-Bench with a 77.36% LC win rate, surpassing Claude-4.5-Sonnet and GPT-4.1.

📝 Abstract
As the development of Large Language Models (LLMs) shifts from parameter scaling to inference-time collaboration, the Mixture-of-Agents (MoA) framework has emerged as a general paradigm to harness collective intelligence by layering diverse models. While recent MoA variants have introduced dynamic routing and residual connections to improve efficiency, these methods often fail to facilitate deep semantic interaction between agents, limiting the system's ability to actively correct hallucinations and refine logic. In this paper, we introduce Attention-MoA, a novel MoA-based framework that redefines collaboration through Inter-agent Semantic Attention. Complemented by an Inter-layer Residual Module with Adaptive Early Stopping Mechanism, our architecture mitigates information degradation in deep layers while improving computational efficiency. Extensive evaluations across AlpacaEval 2.0, MT-Bench, and FLASK demonstrate that Attention-MoA significantly outperforms state-of-the-art baselines, achieving a 91.15% Length-Controlled Win Rate on AlpacaEval 2.0 and dominating in 10 out of 12 capabilities on FLASK. Notably, Attention-MoA enables an ensemble of small open-source models to outperform massive proprietary models like Claude-4.5-Sonnet and GPT-4.1, achieving an MT-Bench score of 8.83 and an AlpacaEval 2.0 LC Win Rate of 77.36%.
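The abstract describes three components: attention-based fusion across agent outputs, inter-layer residual connections, and adaptive early stopping. The sketch below illustrates how these pieces could fit together; it is an illustration only, not the paper's implementation — the embedding dimension, stopping criterion, and use of plain scaled dot-product self-attention over agent response embeddings are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_agent_attention(H):
    """One round of attention across agent embeddings H of shape (n_agents, d).

    Each agent's representation is re-weighted by its semantic similarity
    to every other agent's output (hypothetical simplification of the
    paper's Inter-Agent Semantic Attention).
    """
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)        # agent-to-agent similarity
    weights = softmax(scores, axis=-1)   # each agent attends to all agents
    return weights @ H                   # fused representations

def attention_moa(H, n_layers=4, tol=1e-3):
    """Stack attention layers with residuals; stop early when updates are small.

    The residual (H_new = attention + H) mimics the inter-layer residual
    module; the relative-change threshold `tol` stands in for the adaptive
    early-stopping mechanism (both criteria are assumptions).
    """
    for layer in range(n_layers):
        H_new = inter_agent_attention(H) + H   # inter-layer residual
        if np.linalg.norm(H_new - H) / np.linalg.norm(H) < tol:
            return H_new, layer + 1            # converged: stop early
        H = H_new
    return H, n_layers
```

In this toy form, early stopping saves the cost of deeper layers once agent representations stop changing, which is the efficiency argument the abstract makes for the adaptive mechanism.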
Problem

Research questions and friction points this paper is trying to address.

Mixture-of-Agents
semantic interaction
hallucination correction
multi-agent collaboration
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inter-agent Semantic Attention
Mixture-of-Agents
Residual Synthesis
Adaptive Early Stopping
Collective Intelligence
Jianyu Wen
Meituan LongCat Interaction Team
Yang Wei
Chongqing University of Posts and Telecommunications
adversarial attack, image forgery detection, image processing
Xiongxi Yu
Meituan LongCat Interaction Team
Changxuan Xiao
Meituan LongCat Interaction Team
Ke Zeng
Meituan LongCat Interaction Team