Towards Efficient Agents: A Co-Design of Inference Architecture and System

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLMs deployed as autonomous agents suffer from systemic latency bottlenecks arising from multi-turn reasoning loops, continuously expanding context windows, and heterogeneous tool interactions. To address this, we propose AgentInfer—a holistic, end-to-end inference acceleration framework featuring a novel four-module co-evolving engine: (1) hierarchical dual-model reasoning (AgentCollab); (2) cache-aware hybrid scheduling (AgentSched); (3) semantic caching via suffix automaton–guided speculative decoding (AgentSAM); and (4) asynchronous semantic memory compression (AgentCompress). By jointly optimizing agent reasoning architecture and system-level execution, AgentInfer reduces redundant token generation by over 50% and accelerates end-to-end inference by 1.8–2.5× on the BrowseComp-zh and DeepDiver benchmarks—without any accuracy degradation.

📝 Abstract
The rapid development of large language model (LLM)-based agents has unlocked new possibilities for autonomous multi-turn reasoning and tool-augmented decision-making. However, their real-world deployment is hindered by severe inefficiencies that arise not from isolated model inference, but from the systemic latency accumulated across reasoning loops, context growth, and heterogeneous tool interactions. This paper presents AgentInfer, a unified framework for end-to-end agent acceleration that bridges inference optimization and architectural design. We decompose the problem into four synergistic components: AgentCollab, a hierarchical dual-model reasoning framework that balances large- and small-model usage through dynamic role assignment; AgentSched, a cache-aware hybrid scheduler that minimizes latency under heterogeneous request patterns; AgentSAM, a suffix-automaton-based speculative decoding method that reuses multi-session semantic memory to achieve low-overhead inference acceleration; and AgentCompress, a semantic compression mechanism that asynchronously distills and reorganizes agent memory without disrupting ongoing reasoning. Together, these modules form a Self-Evolution Engine capable of sustaining efficiency and cognitive stability throughout long-horizon reasoning tasks. Experiments on the BrowseComp-zh and DeepDiver benchmarks demonstrate that, through the synergistic collaboration of these methods, AgentInfer reduces ineffective token consumption by over 50%, achieving an overall 1.8–2.5× speedup while preserving accuracy. These results underscore that optimizing for agentic task completion, rather than merely per-token throughput, is the key to building scalable, efficient, and self-improving intelligent systems.
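The abstract describes AgentCollab as routing work between a small and a large model through dynamic role assignment. A minimal sketch of such a router is shown below; the difficulty heuristic, the escalation keywords, and the model stubs are illustrative assumptions, not the paper's actual policy.

```python
# Hedged sketch of dual-model routing in the spirit of AgentCollab.
# The heuristic below (prompt length plus reasoning keywords) is an
# assumption for illustration; the paper's real role assignment is dynamic
# and learned from the agent's reasoning state.
from dataclasses import dataclass
from typing import Callable


@dataclass
class DualModelRouter:
    small: Callable[[str], str]  # fast, cheap model for easy subtasks
    large: Callable[[str], str]  # slow, capable model for hard subtasks
    markers: tuple = ("prove", "plan", "multi-step", "why")

    def difficulty(self, task: str) -> float:
        # Toy score: long prompts and reasoning keywords look "hard".
        score = min(len(task) / 500.0, 1.0)
        if any(m in task.lower() for m in self.markers):
            score += 0.5
        return score

    def run(self, task: str, threshold: float = 0.5) -> str:
        model = self.large if self.difficulty(task) >= threshold else self.small
        return model(task)
```

In a real agent loop the router's decision would also feed back into scheduling (AgentSched), since small-model calls free up large-model capacity for the requests that genuinely need it.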
Problem

Research questions and friction points this paper is trying to address.

Accelerates multi-turn reasoning agents by co-designing inference architecture and system optimization.
Reduces systemic latency from reasoning loops, context growth, and tool interactions.
Optimizes agentic task completion efficiency rather than just per-token throughput.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical dual-model reasoning balances large- and small-model usage.
Suffix-automaton speculative decoding reuses semantic memory for acceleration.
Asynchronous semantic compression distills agent memory without disruption.
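The suffix-automaton idea above can be sketched as follows: index previously generated token sequences in a suffix automaton, then, given the current context, look up the tokens that followed its longest previously seen suffix and propose them as a speculative draft. The token ids, history, and draft length are illustrative assumptions, and the verification step (checking drafts against the target model) is omitted.

```python
# Hedged sketch of suffix-automaton-guided draft retrieval, in the spirit
# of AgentSAM. This is a standard suffix automaton over a token-id history,
# not the paper's implementation.
class SuffixAutomaton:
    """Suffix automaton over a token history, with speculative draft lookup."""

    def __init__(self, history):
        self.history = list(history)
        # Parallel per-state arrays: transitions, suffix link, longest
        # string length, and end index of the state's first occurrence.
        self.next = [{}]
        self.link = [-1]
        self.len = [0]
        self.first_end = [-1]
        self.last = 0
        for tok in self.history:
            self._extend(tok)

    def _new_state(self, nxt, link, length, first_end):
        self.next.append(nxt)
        self.link.append(link)
        self.len.append(length)
        self.first_end.append(first_end)
        return len(self.next) - 1

    def _extend(self, c):
        # Classic online construction; the new suffix ends at index len(last).
        cur = self._new_state({}, -1, self.len[self.last] + 1, self.len[self.last])
        p = self.last
        while p != -1 and c not in self.next[p]:
            self.next[p][c] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.next[p][c]
            if self.len[p] + 1 == self.len[q]:
                self.link[cur] = q
            else:
                clone = self._new_state(dict(self.next[q]), self.link[q],
                                        self.len[p] + 1, self.first_end[q])
                while p != -1 and self.next[p].get(c) == q:
                    self.next[p][c] = clone
                    p = self.link[p]
                self.link[q] = clone
                self.link[cur] = clone
        self.last = cur

    def draft(self, context, k):
        """Return up to k history tokens that followed the longest suffix
        of `context` observed anywhere in the indexed history."""
        v, matched = 0, 0
        for c in context:
            if c in self.next[v]:
                v = self.next[v][c]
                matched += 1
            else:
                while v != -1 and c not in self.next[v]:
                    v = self.link[v]
                if v == -1:
                    v, matched = 0, 0
                else:
                    matched = self.len[v] + 1
                    v = self.next[v][c]
        if matched == 0:
            return []
        start = self.first_end[v] + 1
        return self.history[start:start + k]
```

For example, with history `[1, 2, 3, 4, 1, 2, 3, 5]`, a context ending in `[1, 2, 3]` matches a previously seen span, so the drafter proposes the tokens that followed its first occurrence; the target model would then verify or reject the draft in the usual speculative-decoding fashion.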