Beyond Accuracy: Unveiling Inefficiency Patterns in Tool-Integrated Reasoning

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluations of Tool-Integrated Reasoning (TIR) efficiency overlook the impact of tool-induced KV cache invalidation and inflated long tool responses, leading to inaccurate latency estimates. This work proposes PTE (Prefill Token Equivalents), a hardware-aware unified efficiency metric that explicitly models the latency contributions of non-reusable KV cache entries and tool-generated responses. Using PTE, the authors identify four prevalent inefficiency patterns in TIR and reveal a counterintuitive trend: higher PTE overhead often correlates with lower reasoning accuracy, challenging the common assumption that increased tool usage inherently improves performance. Empirical results demonstrate that PTE aligns closely with measured latency across diverse hardware platforms, significantly outperforming conventional token-counting metrics, and consistently uncovers a negative correlation between PTE overhead and accuracy across five major TIR benchmarks.
📝 Abstract
In real-world Tool-Integrated Reasoning (TIR) scenarios, where LLMs interleave reasoning with external tool calls, a major source of inefficiency is that tool calls create pauses between LLM requests and cause KV-Cache eviction, forcing recomputation. In addition, the long, unfiltered responses returned by external tools inflate the KV-Cache, so each decode step spends more time loading the growing cache and becomes steadily slower as the context lengthens. Existing efficiency metrics such as token counts and tool-call counts fail to capture this real model-inference latency. To address this, we introduce PTE (Prefill Token Equivalents), a hardware-aware TIR-efficiency metric that unifies internal reasoning and external tool-use costs while explicitly accounting for non-reusable KV-Cache entries and long tool responses. Validation in a high-concurrency industrial setting shows that PTE aligns significantly better with wall-clock latency than standard token counts while maintaining consistent efficiency rankings across diverse hardware profiles. We conduct extensive experiments across five TIR benchmarks, quantify their PTE costs, and identify four inefficiency patterns that recur in TIR. We also find that trajectories with higher PTE costs tend to have lower reasoning correctness, indicating that simply using more tools does not improve answer quality.
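The cost model the abstract describes can be sketched as follows. This is a minimal illustrative formulation, not the paper's actual PTE definition: the function name, the segment taxonomy, and the decode-to-prefill cost ratio are all assumptions. It captures the two effects the abstract names, KV-Cache eviction forcing recomputation (counted as re-prefilled tokens) and long tool responses making each decode step slower as the context grows.

```python
# Hedged sketch of a PTE-style (Prefill Token Equivalents) cost model.
# Names and the decode/prefill cost ratio are illustrative assumptions,
# not the paper's exact formulation.

def pte_cost(segments, decode_to_prefill_ratio=5.0, context_scale=4096):
    """Accumulate a PTE-style cost over a TIR trajectory.

    segments: list of (kind, n_tokens) where kind is
      'prefill'   -- tokens prefilled fresh (prompt or tool response),
      'reprefill' -- tokens recomputed after KV-Cache eviction,
      'decode'    -- tokens generated by the model.
    decode_to_prefill_ratio: assumed hardware-dependent cost of one
      decode step, expressed in prefill-token equivalents.
    """
    total = 0.0
    context = 0  # running context length; longer context slows decode
    for kind, n in segments:
        if kind in ("prefill", "reprefill"):
            total += n  # prefill-side tokens count 1:1 by definition
            context += n
        elif kind == "decode":
            # Each decode step loads the whole KV-Cache; approximate the
            # growing per-step cost by scaling with context length.
            total += n * decode_to_prefill_ratio * (1 + context / context_scale)
            context += n
        else:
            raise ValueError(f"unknown segment kind: {kind}")
    return total


# A tool call that evicts the KV-Cache forces re-prefilling the whole
# prior context, so its trajectory costs strictly more PTE than the
# same generation without eviction.
no_tool = pte_cost([("prefill", 1000), ("decode", 200)])
with_tool = pte_cost([("prefill", 1000), ("decode", 100),
                      ("reprefill", 1100),  # context recomputed after eviction
                      ("prefill", 2000),    # long, unfiltered tool response
                      ("decode", 100)])
assert with_tool > no_tool
```

Under this sketch, raw token counts would rate both trajectories by their visible tokens alone, while the PTE-style total also charges for the re-prefilled context and the decode slowdown caused by the inflated cache, which is the gap the metric is meant to close.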
Problem

Research questions and friction points this paper is trying to address.

Tool-Integrated Reasoning
KV-Cache eviction
inference latency
efficiency metrics
long-tool-response
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tool-Integrated Reasoning
KV-Cache inefficiency
PTE metric
inference latency
hardware-aware efficiency