TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches for complex reasoning tasks suffer from low synergy among heterogeneous tools—such as textual reasoning, code execution, and web search—and lack practical, adaptive tool-use strategies. Method: This paper proposes TUMIX, a test-time multi-agent scaling framework that deploys heterogeneous agents in parallel, each employing a distinct, proactive tool-selection policy. It introduces a tool-usage mixture mechanism and a confidence-based dynamic termination strategy to reduce computational overhead without compromising accuracy. Furthermore, TUMIX enables deep integration of multi-source capabilities via iterative answer sharing, LLM-driven self-optimizing agent design, and cross-agent collaborative optimization. Results: Evaluated on Gemini-family models, TUMIX achieves an average accuracy gain of up to 3.55% over the best baseline at comparable inference cost; with dynamic termination enabled, inference cost drops to 49% of the baseline while preserving full performance.
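The summary above can be sketched as a round-based loop. This is a minimal illustration, not the paper's implementation: the three stub agents stand in for real LLM calls with distinct tool policies (textual reasoning, code execution, web search), and majority voting stands in for TUMIX's answer-selection step.

```python
from collections import Counter

# Hypothetical stand-ins for heterogeneous agents; in TUMIX each agent
# couples an LLM with its own tool-use policy. Here the "search" agent
# only converges after seeing the other agents' shared answers.
def text_agent(question, shared):   return "42"
def code_agent(question, shared):   return "42"
def search_agent(question, shared): return "42" if shared else "41"

AGENTS = [text_agent, code_agent, search_agent]

def tumix(question, rounds=3):
    """Sketch of TUMIX's refinement loop: all agents answer in parallel,
    and each subsequent round conditions every agent on the pooled
    answers from the previous round (iterative answer sharing)."""
    shared = []  # answers broadcast across agents between rounds
    for _ in range(rounds):
        shared = [agent(question, shared) for agent in AGENTS]
    # Final answer: majority vote over the last round's answers.
    return Counter(shared).most_common(1)[0][0]

print(tumix("What is 6 * 7?"))  # -> 42 (all agents converge by round 2)
```

In the actual framework the agents run in parallel and the per-round prompt contains the question plus all prior answers; the list comprehension above is sequential only for clarity.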

📝 Abstract
While integrating tools like Code Interpreter and Search has significantly enhanced Large Language Model (LLM) reasoning in models like ChatGPT Agent and Gemini-Pro, practical guidance on optimal tool use is lacking. The core challenge is effectively combining textual reasoning, coding, and search for diverse questions. In this paper, we propose Tool-Use Mixture (TUMIX), an ensemble framework that runs multiple agents in parallel, each employing distinct tool-use strategies and answer paths. Agents in TUMIX iteratively share and refine responses based on the question and previous answers. In experiments, TUMIX achieves significant gains over state-of-the-art tool-augmented and test-time scaling methods, delivering an average accuracy improvement of up to 3.55% over the best baseline on Gemini-2.5-Pro and Gemini-2.5-Flash across key reasoning benchmarks, with near-equal inference costs. We find that agent diversity and quality are crucial and can be enhanced by using LLMs to auto-optimize agent designs. Furthermore, TUMIX can halt refinement upon reaching sufficient confidence, preserving performance at only 49% of the inference cost. Further scaling can achieve higher performance, albeit at a greater cost.
Problem

Research questions and friction points this paper is trying to address.

Optimizing tool combination for diverse question types
Enhancing multi-agent collaboration through iterative refinement
Balancing performance gains with computational cost efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel multi-agent ensemble with diverse tool-use strategies
Iterative response sharing and refinement among agents
Auto-optimized agent designs using LLMs for enhanced diversity
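The cost-efficiency point above hinges on knowing when to stop refining. As a hedged sketch (the paper uses an LLM-judged confidence signal; cross-agent agreement below is only a simplified proxy), a termination check might look like:

```python
from collections import Counter

def should_stop(round_answers, min_rounds_done, threshold=0.8):
    """Confidence-based dynamic termination, sketched: halt refinement
    once a large enough fraction of agents agree on one answer, but only
    after a minimum number of rounds has run. The agreement ratio is a
    stand-in for TUMIX's actual confidence estimate."""
    if not min_rounds_done:
        return False
    top_count = Counter(round_answers).most_common(1)[0][1]
    return top_count / len(round_answers) >= threshold

# 4 of 5 agents agree -> confident enough to halt early.
assert should_stop(["A", "A", "A", "A", "B"], min_rounds_done=True)
# No consensus -> keep refining.
assert not should_stop(["A", "B", "C", "A", "B"], min_rounds_done=True)
```

Stopping on consensus is what lets the framework preserve accuracy at roughly half the inference cost: easy questions converge in early rounds, so later rounds are spent only where agents still disagree.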