🤖 AI Summary
Existing approaches to complex reasoning tasks achieve little synergy among heterogeneous tools—such as textual reasoning, code execution, and web search—and offer little practical guidance on how to combine them adaptively.
Method: This paper proposes TUMIX (Tool-Use Mixture), a test-time multi-agent scaling framework that runs heterogeneous agents in parallel, each with a distinct tool-use strategy and answer path. Agents iteratively share and refine answers conditioned on the question and the previous round's responses, and a confidence-based dynamic termination strategy halts refinement once agreement is sufficient, reducing computational cost without sacrificing accuracy. Agent diversity and quality can be further improved by using LLMs to auto-optimize the agent designs themselves.
Results: On Gemini-2.5-Pro and Gemini-2.5-Flash, TUMIX delivers an average accuracy improvement of up to 3.55% over the best baseline at near-equal inference cost; with dynamic termination enabled, performance is preserved at only 49% of the inference cost.
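The core loop described above—parallel agents with different tool-use styles, iterative answer sharing, and confidence-based early stopping—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `make_agent` stand-ins return canned answers instead of calling an LLM with tools, and simple majority-vote agreement substitutes for the paper's LLM-judged confidence.

```python
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def make_agent(bias):
    """Dummy agent standing in for an LLM agent with one tool-use style
    (e.g. text-only, code execution, or search). Hypothetical interface:
    each agent sees the question plus all answers from the previous round."""
    def agent(question, prior_answers):
        if prior_answers and random.random() < 0.7:
            # Refine toward the current consensus with some probability.
            return Counter(prior_answers).most_common(1)[0][0]
        return bias  # otherwise answer in the agent's own style
    return agent

def tumix(question, agents, max_rounds=4, confidence=0.8):
    """Run agents in parallel; stop early once agreement reaches `confidence`."""
    answers = []
    for round_idx in range(1, max_rounds + 1):
        with ThreadPoolExecutor() as pool:
            # Each round, every agent answers given the previous round's answers.
            answers = list(pool.map(lambda a: a(question, answers), agents))
        top, votes = Counter(answers).most_common(1)[0]
        if votes / len(answers) >= confidence:
            return top, round_idx  # dynamic termination: consensus is strong enough
    return top, max_rounds

# Three "agents" converge on 42; one dissents. With a 0.75 threshold,
# the ensemble terminates after the first round.
answer, rounds = tumix("What is 6 * 7?", [make_agent(a) for a in ("42", "42", "42", "7")],
                       confidence=0.75)
```

In the real framework each agent would be an LLM call with its own tool budget, and the dissenting agent's code- or search-derived answer can pull the ensemble toward a correct minority view in later rounds.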
📝 Abstract
While integrating tools like Code Interpreter and Search has significantly enhanced Large Language Model (LLM) reasoning in models like ChatGPT Agent and Gemini-Pro, practical guidance on optimal tool use is lacking. The core challenge is effectively combining textual reasoning, coding, and search for diverse questions. In this paper, we propose Tool-Use Mixture (TUMIX), an ensemble framework that runs multiple agents in parallel, each employing distinct tool-use strategies and answer paths. Agents in TUMIX iteratively share and refine responses based on the question and previous answers. In experiments, TUMIX achieves significant gains over state-of-the-art tool-augmented and test-time scaling methods, delivering an average accuracy improvement of up to 3.55% over the best baseline on Gemini-2.5-Pro and Gemini-2.5-Flash across key reasoning benchmarks, with near-equal inference costs. We find that agent diversity and quality are crucial and can be enhanced by using LLMs to auto-optimize agent designs. Furthermore, TUMIX can halt refinement upon reaching sufficient confidence, preserving performance at only 49% of the inference cost. Further scaling can achieve higher performance, albeit at a greater cost.