Reducing Latency of LLM Search Agent via Speculation-based Algorithm-System Co-Design

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM)-based search agents suffer from high end-to-end latency due to sequential reasoning and tool invocation. Method: We propose SPAgent—the first algorithm-system co-designed speculative framework for search agents—extending speculative execution to early, simple steps. It introduces two-stage adaptive speculation, comprising LLM action prediction and verification-skipping, coupled with two-level dynamic request scheduling. Contribution/Results: SPAgent breaks the overhead bottleneck of conventional “predict-then-verify” paradigms, enabling real-system deployment and load-aware adaptation. Evaluated on multi-step search tasks, it achieves up to 1.65× end-to-end speedup while maintaining or improving task accuracy, significantly enhancing the practicality of LLM search agents.

📝 Abstract
LLM-based search agents achieve strong performance but suffer from severe latency, as each step requires serialized LLM reasoning followed by tool execution. We revisit this bottleneck through the lens of speculation. While the traditional predict-then-verify speculation paradigm can break serial execution, its benefit remains limited, as it retains the full original workload and adds extra inference overhead. We observe that early agent steps often involve simple evidence-gathering, where correct actions can often be predicted without full reasoning. Building on these observations, we present SPAgent, an algorithm-system co-design framework that expands the role of speculation in search agents to reduce latency. Algorithmically, SPAgent introduces a two-phase adaptive speculation mechanism that selectively omits verification when safe. System-wise, a two-level scheduler regulates speculative requests based on engine load to ensure speculation remains beneficial. We implement SPAgent in real-world systems. Across extensive experimental settings, SPAgent achieves up to $1.65\times$ end-to-end speedup while maintaining the same or even achieving higher accuracy, enabling practical deployment of multi-step search agents.
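The two-phase adaptive speculation described above can be pictured as a small control-flow sketch. Note this is an illustrative reconstruction, not the paper's implementation: the `Speculation` dataclass, the confidence-threshold criterion, and the `SKIP_THRESHOLD` value are all assumptions made here for clarity; the abstract only states that verification is "selectively omitted when safe."

```python
from dataclasses import dataclass

# Hypothetical threshold above which verification is skipped; the paper's
# actual skip criterion is not specified in this summary.
SKIP_THRESHOLD = 0.9


@dataclass
class Speculation:
    action: str        # predicted next tool call, e.g. "search('...')"
    confidence: float  # predictor's self-reported confidence in [0, 1]


def run_step(speculate, full_reasoning, execute):
    """One agent step under two-phase adaptive speculation.

    Phase 1: a lightweight predictor proposes the next action.
    Phase 2: if the step looks like simple evidence-gathering (high
    confidence), skip verification entirely and execute the speculated
    action; otherwise fall back to full LLM reasoning, as in the
    conventional predict-then-verify path.
    """
    spec = speculate()
    if spec.confidence >= SKIP_THRESHOLD:
        return execute(spec.action)   # verification skipped
    verified = full_reasoning()       # full reasoning decides the action
    return execute(verified)
```

The point of the sketch is the asymmetry: the expensive `full_reasoning` call is only paid when the cheap predictor is unsure, which is how speculation reduces, rather than merely overlaps, the original workload.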
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in LLM-based search agents via speculation-based co-design
Addressing serial execution bottlenecks in multi-step search agent workflows
Optimizing speculation mechanisms to maintain accuracy while accelerating performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-phase adaptive speculation mechanism for agents
Two-level scheduler for speculative request regulation
Algorithm-system co-design framework reducing latency
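The two-level scheduler in the list above can be sketched as an admission-plus-ordering policy. This is a hypothetical illustration only: the class name, the `MAX_SPECULATIVE_LOAD` threshold, and the choice of load signal are assumptions; the abstract says only that speculative requests are regulated "based on engine load."

```python
import heapq

# Illustrative load threshold; the real scheduler's policy and signals
# (queue depth, KV-cache pressure, etc.) are not detailed here.
MAX_SPECULATIVE_LOAD = 0.7

NORMAL, SPECULATIVE = 0, 1  # lower value = higher priority


class TwoLevelScheduler:
    """Admit and order requests so speculation never starves real work."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserving FIFO order within a level

    def submit(self, request, speculative, engine_load):
        # Level 1 (admission): drop speculative work when the engine is
        # busy, so speculation only runs when it remains beneficial.
        if speculative and engine_load > MAX_SPECULATIVE_LOAD:
            return False
        # Level 2 (ordering): normal requests always run before
        # speculative ones in the priority queue.
        prio = SPECULATIVE if speculative else NORMAL
        heapq.heappush(self._queue, (prio, self._seq, request))
        self._seq += 1
        return True

    def next_request(self):
        return heapq.heappop(self._queue)[2] if self._queue else None
```

Under this sketch, speculative requests are opportunistic: they fill spare capacity at low load and are shed entirely at high load, which matches the load-aware adaptation the summary attributes to SPAgent.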