🤖 AI Summary
To address high inference latency and computational cost in deploying large language model (LLM) agents, this paper proposes DSP, a Dynamic Speculative Planning framework that employs online, asynchronous reinforcement learning with no offline training. DSP integrates speculative execution, dynamic routing decisions, and multi-objective optimization to jointly regulate end-to-end latency and computational cost. It provides a tunable, lossless acceleration mechanism, letting users flexibly trade speed against resource expenditure. Evaluated on two standard agent benchmarks, DSP matches the inference efficiency of the current fastest lossless methods while reducing total cost by 30% and cutting unnecessary computation by up to 60%. Critically, it incurs zero pre-deployment overhead.
📝 Abstract
Despite their remarkable success on complex tasks, which has propelled widespread adoption, large language-model-based agents still face critical deployment challenges due to prohibitive latency and inference costs. While recent work has explored various methods to accelerate inference, existing approaches suffer from significant limitations: they either fail to preserve performance fidelity, require extensive offline training of router modules, or incur excessive operational costs. Moreover, they provide minimal user control over the tradeoff between acceleration and other performance metrics. To address these gaps, we introduce Dynamic Speculative Planning (DSP), an asynchronous online reinforcement learning framework that provides lossless acceleration at substantially reduced cost, without requiring additional pre-deployment preparation. DSP explicitly optimizes a joint objective balancing end-to-end latency against dollar cost, allowing practitioners to adjust a single parameter that steers the system toward faster responses, cheaper operation, or any point along this continuum. Experiments on two standard agent benchmarks demonstrate that DSP achieves efficiency comparable to the fastest lossless acceleration method while reducing total cost by 30% and unnecessary cost by up to 60%. Our code and data are available at https://github.com/guanyilin428/Dynamic-Speculative-Planning.
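The single-parameter trade-off the abstract describes can be pictured as a scalar weight blending latency and dollar cost into one objective. The sketch below is a hypothetical illustration of that idea; the names `alpha`, `latency_s`, and `dollar_cost` are our own assumptions, not the paper's notation or implementation.

```python
# Hypothetical sketch of a joint latency/cost objective with one
# steering parameter, as described in the abstract. All names here
# are illustrative assumptions, not the paper's actual formulation.

def joint_objective(latency_s: float, dollar_cost: float, alpha: float) -> float:
    """Weighted objective: alpha -> 1 favors speed, alpha -> 0 favors cost."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * latency_s + (1.0 - alpha) * dollar_cost

# The same run scored under two different user preferences:
speed_first = joint_objective(latency_s=2.0, dollar_cost=0.10, alpha=0.9)
cost_first = joint_objective(latency_s=2.0, dollar_cost=0.10, alpha=0.1)
```

Sweeping `alpha` across [0, 1] traces out the latency/cost continuum the abstract refers to; a practitioner would pick the point matching their deployment budget.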