Demystifying and Enhancing the Efficiency of Large Language Model Based Search Agents

πŸ“… 2025-05-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
LLM-based search agents suffer from severe efficiency bottlenecks in dynamic problem decomposition and alternating reasoning-retrieval execution: precise retrieval incurs high latency, while coarse-grained retrieval necessitates additional reasoning steps; inefficient scheduling and retrieval-induced blocking cause cascading delays, where even minor retrieval latencies are significantly amplified. Method: We propose SearchAgent-X, the first high-throughput, low-latency inference framework for LLM search agents, supporting priority-aware scheduling and non-blocking retrieval. Contributions/Results: Its core innovations include high-recall approximate retrieval, dynamic priority-driven task scheduling, decoupled integration of retrieval and reasoning, and end-to-end LLM inference optimization. Experiments demonstrate that, without any degradation in generation quality, SearchAgent-X achieves 3.4× higher throughput and 5× lower end-to-end latency compared to state-of-the-art baselines such as vLLM and HNSW-based retrieval.

πŸ“ Abstract
Large Language Model (LLM)-based search agents have shown remarkable capabilities in solving complex tasks by dynamically decomposing problems and addressing them through interleaved reasoning and retrieval. However, this interleaved paradigm introduces substantial efficiency bottlenecks. First, we observe that both highly accurate and overly approximate retrieval methods degrade system efficiency: exact search incurs significant retrieval overhead, while coarse retrieval requires additional reasoning steps during generation. Second, we identify inefficiencies in system design, including improper scheduling and frequent retrieval stalls, which lead to cascading latency -- where even minor delays in retrieval amplify end-to-end inference time. To address these challenges, we introduce SearchAgent-X, a high-efficiency inference framework for LLM-based search agents. SearchAgent-X leverages high-recall approximate retrieval and incorporates two key techniques: priority-aware scheduling and non-stall retrieval. Extensive experiments demonstrate that SearchAgent-X consistently outperforms state-of-the-art systems such as vLLM and HNSW-based retrieval across diverse tasks, achieving up to 3.4× higher throughput and 5× lower latency, without compromising generation quality. SearchAgent-X is available at https://github.com/tiannuo-yang/SearchAgent-X.
Problem

Research questions and friction points this paper is trying to address.

Efficiency bottlenecks in LLM-based search agents
Ineffective retrieval methods degrading system performance
System design inefficiencies causing cascading latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

High-recall approximate retrieval for efficiency
Priority-aware scheduling to reduce delays
Non-stall retrieval to prevent latency amplification
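The two systems-level ideas above can be illustrated with a minimal sketch. This is our own hypothetical Python illustration (function names and the steps-remaining heuristic are assumptions, not the paper's actual implementation): priority-aware scheduling dispatches the request closest to completion first, and non-stall retrieval lets generation proceed with an approximate result instead of blocking on exact search past a deadline.

```python
import heapq

def schedule(requests):
    """Priority-aware scheduling sketch.

    requests: list of (request_id, steps_remaining) pairs.
    Returns request ids in dispatch order, fewest remaining
    steps first, so nearly-finished agent sessions are not
    starved by newly arriving ones.
    """
    heap = [(steps, rid) for rid, steps in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

def non_stall_retrieve(approx_result, exact_result, exact_ready):
    """Non-stall retrieval sketch: if the exact search has not
    finished when generation needs a result, fall back to the
    high-recall approximate result rather than blocking."""
    return exact_result if exact_ready else approx_result
```

For example, `schedule([("a", 5), ("b", 1), ("c", 3)])` dispatches `b` first, and `non_stall_retrieve` returns the approximate hit whenever the exact index is still busy.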
Tiannuo Yang
University of Southern California
Machine Learning Systems · Cloud Computing
Zebin Yao
Nankai University
Bowen Jin
University of Illinois, Urbana Champaign
large language models · agents · RL
Lixiao Cui
Nankai University
Yusen Li
Nankai University
Gang Wang
Nankai University
Xiaoguang Liu
Nankai University