🤖 AI Summary
Mixture-of-Agents (MoA) methods implicitly assume that mixing different LLMs improves performance over single-model baselines. This work systematically tests where that assumption holds and where it breaks down.
Method: We propose Self-MoA, an ensemble method that repeatedly samples outputs from only the single top-performing LLM and aggregates them, removing the reliance on heterogeneous model collaboration inherent in conventional MoA (see the sketch below).
Contribution/Results: We show that output quality, rather than model diversity, is the primary driver of ensemble performance. We further design a scalable sequential aggregation variant and validate across AlpacaEval 2.0, MMLU, CRUX, and MATH. Self-MoA outperforms standard MoA by 6.6% on AlpacaEval 2.0 and by 3.8% on average across benchmarks, and applying it to a top-ranking model yields a new state of the art on the AlpacaEval 2.0 leaderboard.
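The following is a minimal sketch of the Self-MoA idea as described in the abstract: sample the same top model several times, then aggregate the candidates in one final call. The `generate` stub, the aggregator prompt, and the sampling parameters are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal Self-MoA sketch. `generate` is a hypothetical wrapper around whichever
# single top-performing LLM is being ensembled; plug in your own client.

AGGREGATOR_PROMPT = (
    "You are given several candidate responses to the same user query. "
    "Synthesize them into a single, higher-quality response.\n\n"
    "Query:\n{query}\n\nCandidates:\n{candidates}"
)

def generate(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a call to the single top-performing LLM."""
    raise NotImplementedError("Wire this up to your model or API client.")

def self_moa(query: str, num_samples: int = 6, temperature: float = 0.7) -> str:
    # 1) Sample the *same* model repeatedly to obtain diverse in-model outputs.
    candidates = [generate(query, temperature=temperature) for _ in range(num_samples)]
    # 2) Aggregate all candidates with one final low-temperature call.
    formatted = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(candidates))
    prompt = AGGREGATOR_PROMPT.format(query=query, candidates=formatted)
    return generate(prompt, temperature=0.0)
```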
📝 Abstract
Ensembling outputs from diverse sources is a straightforward yet effective approach to boost performance. Mixture-of-Agents (MoA) is one such popular ensemble method that aggregates outputs from multiple different Large Language Models (LLMs). This paper raises the question in the context of language models: is mixing different LLMs truly beneficial? We propose Self-MoA -- an ensemble method that aggregates outputs from only the single top-performing LLM. Our extensive experiments reveal that, surprisingly, Self-MoA outperforms standard MoA that mixes different LLMs in a large number of scenarios: Self-MoA achieves a $6.6\%$ improvement over MoA on the AlpacaEval 2.0 benchmark, and an average of $3.8\%$ improvement across various benchmarks, including MMLU, CRUX, and MATH. Applying Self-MoA to one of the top-ranking models in AlpacaEval 2.0 directly achieves the new state-of-the-art performance on the leaderboard. To understand the effectiveness of Self-MoA, we systematically investigate the trade-off between diversity and quality of outputs under various MoA settings. We confirm that MoA performance is quite sensitive to quality, and that mixing different LLMs often lowers the average quality of the models. To complement the study, we identify the scenarios where mixing different LLMs can be helpful. This paper further introduces a sequential version of Self-MoA that can aggregate a large number of LLM outputs on-the-fly over multiple rounds, and is as effective as aggregating all outputs at once.
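The abstract's sequential variant aggregates outputs round by round rather than all at once. Below is a rough sketch of one way such a scheme could look, reusing the hypothetical `generate` stub and `AGGREGATOR_PROMPT` from the sketch above; the round count, window size, and the choice to carry the running synthesis forward are assumptions for illustration, not the paper's exact procedure.

```python
def self_moa_seq(query: str, num_rounds: int = 3, samples_per_round: int = 3) -> str:
    # Aggregate incrementally: each round samples a few fresh candidates and
    # folds them into a running synthesis, so an arbitrary number of outputs
    # can be combined within a bounded context window.
    synthesis = None
    for _ in range(num_rounds):
        candidates = [generate(query, temperature=0.7) for _ in range(samples_per_round)]
        if synthesis is not None:
            # Keep the previous aggregate in the candidate pool for the next round.
            candidates = [synthesis] + candidates
        formatted = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(candidates))
        prompt = AGGREGATOR_PROMPT.format(query=query, candidates=formatted)
        synthesis = generate(prompt, temperature=0.0)
    return synthesis
```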