AutoTool: Efficient Tool Selection for Large Language Model Agents

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prohibitively high inference cost of LLM-based agents on complex tasks—particularly under frequent-invocation paradigms like ReAct—this paper proposes AutoTool, a graph-structured framework for efficient tool selection. The method models historical tool invocation sequences as a directed graph with transition probabilities, introducing the concept of "tool usage inertia" to capture sequential dependencies between tool calls. Tool selection and parameter prediction are performed via trajectory-driven graph traversal, eliminating the need for repeated LLM calls, and input generation is further optimized at the parameter level. Experiments across diverse task domains demonstrate up to a 30% reduction in inference cost while maintaining task completion rates comparable to baseline approaches, yielding significant gains in agent efficiency and scalability without compromising performance.
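The core idea—learning transition probabilities from historical tool sequences and following high-probability edges instead of querying the LLM—can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the confidence threshold, and the example tool names (`search`, `read_page`, etc.) are all hypothetical.

```python
from collections import defaultdict

def build_transition_graph(trajectories):
    """Build a directed graph over tools, where each edge weight is the
    empirical probability of one tool following another in past runs."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in trajectories:
        for prev_tool, next_tool in zip(seq, seq[1:]):
            counts[prev_tool][next_tool] += 1
    graph = {}
    for prev_tool, nexts in counts.items():
        total = sum(nexts.values())
        graph[prev_tool] = {tool: c / total for tool, c in nexts.items()}
    return graph

def predict_next_tool(graph, current_tool, threshold=0.6):
    """Follow the highest-probability outgoing edge ("tool usage inertia").
    Return None when the graph is not confident enough, signalling a
    fallback to normal LLM-based tool selection."""
    successors = graph.get(current_tool, {})
    if not successors:
        return None  # no history for this tool: defer to the LLM
    best_tool, prob = max(successors.items(), key=lambda kv: kv[1])
    return best_tool if prob >= threshold else None

# Hypothetical history: "search" is usually followed by "read_page".
history = [
    ["search", "read_page", "summarize"],
    ["search", "read_page", "answer"],
    ["search", "calculator"],
]
graph = build_transition_graph(history)
print(predict_next_tool(graph, "search"))  # → read_page (inertia edge)
```

A real system would also condition the traversal on the current trajectory and predict tool parameters, as the paper describes; the threshold trades off LLM-call savings against the risk of following a wrong edge.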

📝 Abstract
Large Language Model (LLM) agents have emerged as powerful tools for automating complex tasks by leveraging the reasoning and decision-making abilities of LLMs. However, a major bottleneck in current agent frameworks lies in the high inference cost of tool selection, especially in approaches like ReAct that repeatedly invoke the LLM to determine which tool to use at each step. In this work, we propose AutoTool, a novel graph-based framework that bypasses repeated LLM inference by exploiting a key empirical observation: tool usage inertia, the tendency of tool invocations to follow predictable sequential patterns. AutoTool constructs a directed graph from historical agent trajectories, where nodes represent tools and edges capture transition probabilities, effectively modeling the inertia in tool selection. It further integrates parameter-level information to refine tool input generation. By traversing this structured representation, AutoTool efficiently selects tools and their parameters with minimal reliance on LLM inference. Extensive experiments across diverse agent tasks demonstrate that AutoTool reduces inference costs by up to 30% while maintaining competitive task completion rates, offering a practical and scalable enhancement for inference-heavy frameworks. Our work highlights the promise of integrating statistical structure into LLM agent design for greater efficiency without sacrificing performance.
Problem

Research questions and friction points this paper is trying to address.

Reducing high inference costs in LLM agent tool selection
Modeling tool usage inertia patterns to bypass repeated LLM calls
Enhancing efficiency while maintaining competitive task completion rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based framework bypasses repeated LLM inference
Models tool selection inertia using transition probabilities
Integrates parameter-level information for input generation
Jingyi Jia
School of Computer Science and Technology, Huazhong University of Science and Technology
Qinbin Li
Professor, Computer Science, Huazhong University of Science and Technology
Machine Learning System, Data Science, Federated Learning