Optimizing Agentic Language Model Inference via Speculative Tool Calls

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tool-augmented language model (LM) agents suffer from inference inefficiency caused by frequent external tool invocations (e.g., API calls, code execution, web search). Method: a system-level optimization framework featuring (i) speculative tool invocation, a mechanism that predicts and pre-schedules tool calls while keeping sequences resident in the inference engine; (ii) a standardized "tool cache" API endpoint enabling vendor-agnostic adoption by LM providers; and (iii) a theoretical analysis guiding speculation configuration, cache warm-up, and dynamic parameter tuning. Contributions/Results: throughput improvements of several hundred tokens per second when hosting inference for LM agents, with reduced latency across diverse tool-calling workloads and no loss of accuracy or functional correctness.

📝 Abstract
Language models (LMs) are becoming increasingly dependent on external tools. LM-based agentic frameworks frequently interact with their environment via such tools to search files, run code, call APIs, etc. Further, modern reasoning-based LMs use tools such as web search and Python code execution to enhance their reasoning capabilities. While tools greatly improve the capabilities of LMs, they also introduce performance bottlenecks during the inference process. In this paper, we introduce novel systems optimizations to address such performance bottlenecks by speculating tool calls and forcing sequences to remain resident in the inference engine to minimize overheads. Our optimizations lead to throughput improvements of several hundred tokens per second when hosting inference for LM agents. We provide a theoretical analysis of our algorithms to provide insights into speculation configurations that will yield the best performance. Further, we recommend a new "tool cache" API endpoint to enable LM providers to easily adopt these optimizations.
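The abstract recommends a "tool cache" API endpoint but the card gives no concrete schema, so the following is only a minimal sketch of what a vendor-agnostic tool-result cache might look like. All names here (`ToolCache`, `put`, `get`, the key layout) are hypothetical, not the paper's actual interface:

```python
import hashlib
import json


class ToolCache:
    """Hypothetical vendor-agnostic tool cache: results of deterministic
    tool calls are keyed by (tool name, canonicalized arguments), so any
    inference engine can look up a result before re-invoking the tool."""

    def __init__(self):
        self.store = {}

    @staticmethod
    def key(tool, args):
        # Canonical JSON so semantically equal argument dicts map to one key.
        blob = json.dumps({"tool": tool, "args": args}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def put(self, tool, args, result):
        self.store[self.key(tool, args)] = result

    def get(self, tool, args):
        # Returns the cached result, or None on a miss.
        return self.store.get(self.key(tool, args))
```

A provider exposing this as an HTTP endpoint would presumably map `put`/`get` onto request handlers; the point of the sketch is only the cache-key discipline that makes the interface engine-agnostic.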
Problem

Research questions and friction points this paper is trying to address.

Optimizing inference performance for tool-using language models
Reducing bottlenecks from external tool calls in LM agents
Enhancing throughput via speculative tool call techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculating tool calls to reduce inference bottlenecks
Keeping sequences resident in engine to minimize overheads
Proposing tool cache API for easy optimization adoption
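The mechanics behind the first bullet can be illustrated with a small sketch: launch a predicted tool call early, then either claim its in-flight result when the model actually emits that call (a speculation hit) or fall back to a normal invocation (a miss). This is an illustrative simplification, not the paper's implementation; `SpeculativeToolRunner` and its methods are hypothetical names.

```python
import concurrent.futures


class SpeculativeToolRunner:
    """Sketch of speculative tool invocation: a predicted call is fired
    in the background so its latency overlaps with decoding."""

    def __init__(self, tools):
        self.tools = tools  # name -> callable taking one argument
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
        self.pending = {}   # (name, arg) -> in-flight Future

    def speculate(self, name, arg):
        # Pre-schedule the predicted call; keyed so a matching real
        # call can claim the result later.
        key = (name, arg)
        if key not in self.pending:
            self.pending[key] = self.pool.submit(self.tools[name], arg)

    def invoke(self, name, arg):
        # The model has now actually emitted the call. On a hit, await
        # the in-flight future; on a miss, run the tool synchronously.
        fut = self.pending.pop((name, arg), None)
        if fut is not None:
            return fut.result(), True   # (result, speculation hit)
        return self.tools[name](arg), False
```

On a hit the sequence never has to leave the engine and wait for a fresh tool round-trip, which is the overhead the paper's sequence-residency optimization targets.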