🤖 AI Summary
This work addresses the inefficiency of multi-agent systems on complex reasoning tasks, where uniformly deploying large language models (LLMs) incurs excessive computational cost and ignores the varying cognitive demands of different reasoning stages. To overcome this limitation, the authors propose the OI-MAS framework, which draws on a heterogeneous pool of multi-scale LLMs and combines state-dependent dynamic routing with a confidence-aware selection mechanism. This enables adaptive allocation of agent roles and model scales during reasoning, aligning task requirements with model capabilities. Experiments show that OI-MAS improves accuracy by up to 12.88% while reducing computational cost by as much as 79.78%.
📝 Abstract
While multi-agent systems (MAS) have demonstrated superior performance over single-agent approaches in complex reasoning tasks, they often suffer from significant computational inefficiencies. Existing frameworks typically deploy large language models (LLMs) uniformly across all agent roles, failing to account for the varying cognitive demands of different reasoning stages. We address this inefficiency by proposing OI-MAS, a novel multi-agent framework that implements an adaptive model-selection policy over a heterogeneous pool of multi-scale LLMs. Specifically, OI-MAS introduces a state-dependent routing mechanism that dynamically selects agent roles and model scales throughout the reasoning process. In addition, we introduce a confidence-aware mechanism that selects appropriate model scales conditioned on task complexity, reducing unnecessary reliance on large-scale models. Experimental results show that OI-MAS consistently outperforms baseline multi-agent systems, improving accuracy by up to 12.88% while reducing cost by up to 79.78%.
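To make the confidence-aware idea concrete, the sketch below shows one plausible reading of it: a router tries a small model first and escalates to a larger model only when the small model's confidence falls below a threshold. All names here (`ModelTier`, `route`, the toy models) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ModelTier:
    """One model in the heterogeneous pool, ordered small -> large."""
    name: str
    cost: float                                # relative cost per call (illustrative)
    run: Callable[[str], Tuple[str, float]]    # returns (answer, confidence)

def route(task: str, tiers: List[ModelTier], threshold: float = 0.8):
    """Ascend the pool; stop escalating once confidence meets the threshold."""
    total_cost = 0.0
    answer, conf = "", 0.0
    for tier in tiers:
        answer, conf = tier.run(task)
        total_cost += tier.cost
        if conf >= threshold:                  # confident enough: skip larger models
            break
    return answer, conf, total_cost

# Toy stand-ins for actual LLM calls (hypothetical names and numbers).
small = ModelTier("small-7b", cost=1.0, run=lambda t: ("draft answer", 0.6))
large = ModelTier("large-70b", cost=10.0, run=lambda t: ("refined answer", 0.95))

ans, conf, cost = route("hard reasoning task", [small, large])
# Here the small model is under-confident (0.6 < 0.8), so the router escalates;
# an easy task answered confidently by the small model would cost only 1.0.
```

The cost savings reported in the abstract would then come from easy reasoning steps terminating at the cheap tier, with large models reserved for steps where confidence is low.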