🤖 AI Summary
To address the linear growth of the KV cache and the associated memory bottleneck in test-time scaling (TTS) with long chain-of-thought (CoT) reasoning, this paper proposes AsyncSpade, an asynchronous sparse decoding framework. It introduces the first query-aware sparsification mechanism that operates *without waiting on the decoding loop*, enabled by a lightweight temporal-regression module that predicts the next query state. This decouples KV-cache filtering from autoregressive decoding, allowing fine-grained token-level selection alongside page-level sparse attention. By breaking the sequential dependency, the framework overlaps KV-cache pruning with forward computation, significantly improving serving efficiency under high-concurrency, long-CoT workloads. Evaluated on Qwen3-8B/32B, AsyncSpade reduces per-token latency by over 20% compared to Quest and by at least 50% relative to full attention, while matching or surpassing accuracy across multiple TTS benchmarks.
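To make the temporal-regression idea concrete, here is a minimal sketch that predicts the next-step query state as a learned linear mix of the last `W` per-head query states. The module name, tensor shapes, and the linear-mix parameterization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class QueryPredictor(nn.Module):
    """Hypothetical temporal-regressive query predictor (illustrative only).

    Regresses the next-token query state from a short window of recent
    per-head query states, so KV-cache filtering can start before the
    decoding loop produces the real query.
    """

    def __init__(self, window: int):
        super().__init__()
        # One learnable coefficient per position in the recent-query window.
        self.mix = nn.Linear(window, 1, bias=False)

    def forward(self, recent_q: torch.Tensor) -> torch.Tensor:
        # recent_q: [batch, heads, window, head_dim]
        # Move the window axis last so the linear layer mixes over it.
        q_hat = self.mix(recent_q.transpose(-1, -2))  # [B, H, D, 1]
        return q_hat.squeeze(-1)                      # [B, H, D]

# Example: predict the next query from the last 8 queries of a 32-head model.
predictor = QueryPredictor(window=8)
recent_q = torch.randn(1, 32, 8, 128)
q_hat = predictor(recent_q)  # [1, 32, 128], usable for scoring the KV cache
```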
📝 Abstract
Test-time scaling (TTS) boosts LLM reasoning via long chain-of-thought (CoT), but the linear growth of the KV cache amplifies the memory-bound bottleneck of LLM decoding. Query-aware page-level sparse decoding can achieve state-of-the-art performance under constrained FLOPs budgets, but it is limited by sequentially dependent page filtering and coarse-grained token selection, which hamper serving efficiency and model performance on TTS tasks under high-concurrency, long-CoT scenarios (the filtering step can consume even more runtime than the forward pipeline itself). In this paper, we first find that the current-step query state can be accurately approximated, in a unified manner, from a short window of recent queries, enabling training-free query-aware sparsity without waiting inside the decoding loop. We propose AsyncSpade, an asynchronous framework for efficient TTS built on two core components: (1) a novel lightweight temporal-regressive module that predicts the next-token query state; (2) an asynchronous, disaggregated framework that decouples KV-cache filtering from the autoregressive decoding loop, overlapping token-level KV selection with the forward inference computation. To our knowledge, AsyncSpade is the first to eliminate this sequential dependence without sacrificing model performance. We validate the effectiveness of AsyncSpade on common LLM serving setups with an A100 node, where AsyncSpade fully overlaps KV-cache operations with the inference pipeline, achieving the theoretically optimal time-per-output-token (TPOT). Specifically, AsyncSpade delivers over 20% TPOT reduction compared to the state-of-the-art baseline (Quest) and at least 50% TPOT reduction compared to full attention on Qwen3-8B and Qwen3-32B, while matching or surpassing their accuracy on various TTS benchmarks (AIME-24/25, GPQA-Diamond, MATH-500).
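To illustrate the asynchronous decoupling, the sketch below realizes one decode step with a side CUDA stream: token-level top-k selection runs with the predicted query `q_hat` while the main stream attends over the tokens selected at the previous step. The tensor shapes, the `forward_sparse` stand-in, and the stream-based realization are assumptions for illustration; the paper's disaggregated framework may implement the overlap differently.

```python
import torch

# Illustrative stand-ins (hypothetical sizes): B batch, H heads,
# N cached tokens, D head dim, TOPK tokens kept per head.
B, H, N, D, TOPK = 1, 32, 4096, 128, 512

def forward_sparse(q, keys, values, active):
    # Attend only over the token indices selected at the *previous* step.
    idx = active.unsqueeze(-1).expand(-1, -1, -1, D)
    k, v = keys.gather(2, idx), values.gather(2, idx)
    attn = torch.softmax(torch.einsum("bhd,bhnd->bhn", q, k) / D**0.5, dim=-1)
    return torch.einsum("bhn,bhnd->bhd", attn, v)

keys = torch.randn(B, H, N, D, device="cuda")
values = torch.randn(B, H, N, D, device="cuda")
active = torch.arange(TOPK, device="cuda").expand(B, H, TOPK)  # warm-start selection

q_true = torch.randn(B, H, D, device="cuda")  # actual query of this step
q_hat = torch.randn(B, H, D, device="cuda")   # predicted next-step query

main = torch.cuda.current_stream()
filter_stream = torch.cuda.Stream()

# Side stream: token-level top-k selection with the predicted query,
# launched without blocking the decoding loop.
filter_stream.wait_stream(main)
with torch.cuda.stream(filter_stream):
    scores = torch.einsum("bhd,bhnd->bhn", q_hat, keys)
    next_active = scores.topk(TOPK, dim=-1).indices

# Main stream: this step's sparse attention uses last step's selection,
# so decoding never waits on cache filtering.
out = forward_sparse(q_true, keys, values, active)

# Publish the fresh selection before the next decode step consumes it.
main.wait_stream(filter_stream)
active = next_active
```

Because selection for step t+1 is launched before step t's forward pass completes, the filtering latency is hidden behind compute, which is what lets TPOT approach its overlap-free optimum.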