SpeContext: Enabling Efficient Long-context Reasoning with Speculative Context Sparsity in LLMs

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low throughput and high latency in long-context inference of large language models (LLMs) caused by KV cache bottlenecks, this paper proposes SpeContext—a holistic algorithm-system-compiler co-optimization framework. Methodologically, it introduces: (1) a lightweight retrieval head based on knowledge distillation and attention head weighting for efficient context filtering; (2) an asynchronous prefetching dataflow with elastic KV cache loading to overlap computation and memory access; and (3) an information-theoretic adaptive memory manager enabling head-level attention pruning and GPU memory utilization optimization. Evaluated on cloud and edge devices, SpeContext achieves up to 24.89× higher throughput and 10.06× faster inference compared to Hugging Face, with negligible accuracy degradation. The framework significantly expands the Pareto frontier of accuracy–throughput trade-offs.

📝 Abstract
In this paper, we point out that the objective of retrieval algorithms is to align with the LLM, which resembles the objective of knowledge distillation in LLMs. We analyze the similarity in information focus between a distilled language model (DLM) and the original LLM from the perspective of information theory, and propose a novel paradigm that leverages the DLM as the retrieval algorithm. Based on this insight, we present SpeContext, an algorithm and system co-design for long-context reasoning. (1) At the algorithm level, SpeContext proposes a lightweight retrieval head based on the head-level attention weights of the DLM, achieving >90% parameter reduction by pruning redundancy. (2) At the system level, SpeContext designs an asynchronous prefetch dataflow with an elastic loading strategy, effectively overlapping KV cache retrieval with LLM computation. (3) At the compilation level, SpeContext constructs a theoretical memory model and implements an adaptive memory management system that accelerates inference by maximizing GPU memory utilization. We deploy and evaluate SpeContext in two resource-constrained environments, cloud and edge. Extensive experiments show that, compared with the Hugging Face framework, SpeContext achieves up to 24.89× throughput improvement in the cloud and 10.06× speedup on the edge with negligible accuracy loss, pushing the Pareto frontier of accuracy and throughput.
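The core retrieval idea in the abstract, using a small distilled model's attention as a cheap proxy for the full LLM's information focus, can be illustrated with a minimal sketch. The function below pools per-token proxy attention scores into per-chunk scores and keeps the top-k context chunks; the names and the pooling choice are illustrative assumptions, not the paper's exact retrieval head.

```python
import numpy as np

def select_context_chunks(proxy_attn, chunk_size, k):
    """Pick the top-k context chunks by pooled proxy attention mass.

    proxy_attn: 1-D array of per-token attention scores from the small
    distilled model (DLM), used as a stand-in for the LLM's attention.
    Returns the kept chunk indices in original document order.
    """
    n = len(proxy_attn)
    n_chunks = (n + chunk_size - 1) // chunk_size
    # Pool token-level scores into one score per chunk.
    scores = np.array([
        proxy_attn[i * chunk_size:(i + 1) * chunk_size].sum()
        for i in range(n_chunks)
    ])
    # Keep the k highest-scoring chunks, sorted back into order.
    return np.sort(np.argsort(scores)[-k:])

# Toy example: 16 tokens, chunks of 4 tokens, keep 2 chunks.
attn = np.array([0.0] * 4 + [0.5] * 4 + [0.1] * 4 + [0.9] * 4)
print(select_context_chunks(attn, chunk_size=4, k=2))  # chunks 1 and 3
```

Only the KV cache entries of the selected chunks would then be loaded for the LLM's attention, which is what makes the retrieval speculative and sparse.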
Problem

Research questions and friction points this paper is trying to address.

Enhances long-context reasoning efficiency in LLMs via speculative context sparsity.
Reduces retrieval parameters and overlaps KV cache retrieval with computation.
Maximizes GPU memory utilization for acceleration in cloud and edge environments.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight retrieval head reduces parameters by pruning redundancy.
Asynchronous prefetch dataflow overlaps KV cache retrieval with computation.
Adaptive memory management maximizes GPU utilization for acceleration.
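The asynchronous prefetch idea above, overlapping KV cache loading with computation, can be sketched with a background loader thread and a bounded queue. This is a simplified stand-in for the paper's dataflow: `load_fn` and `compute_fn` are hypothetical placeholders, and the bounded queue loosely mimics the elastic loading strategy's back-pressure.

```python
import queue
import threading

def prefetch_worker(chunk_ids, load_fn, out_q):
    """Load KV-cache chunks in a background thread so I/O overlaps
    with the main computation loop."""
    for cid in chunk_ids:
        out_q.put((cid, load_fn(cid)))
    out_q.put(None)  # sentinel: no more chunks

def run_with_prefetch(chunk_ids, load_fn, compute_fn):
    # Bounded queue: the loader stays only a little ahead of compute.
    q = queue.Queue(maxsize=2)
    t = threading.Thread(target=prefetch_worker,
                         args=(chunk_ids, load_fn, q))
    t.start()
    results = []
    while (item := q.get()) is not None:
        cid, kv = item
        # Compute on this chunk while the next one loads in background.
        results.append(compute_fn(cid, kv))
    t.join()
    return results

# Toy demo with fake load/compute functions.
out = run_with_prefetch([0, 1, 2],
                        load_fn=lambda c: f"kv{c}",
                        compute_fn=lambda c, kv: (c, kv))
print(out)  # [(0, 'kv0'), (1, 'kv1'), (2, 'kv2')]
```

In a real system the loader would issue device-to-host or host-to-device copies (e.g. on a separate CUDA stream) rather than Python calls, but the overlap structure is the same.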
Jiaming Xu
Shanghai Jiao Tong University; SII
Jiayi Pan
Shanghai Jiao Tong University; Infinigence-AI
Hanzhen Wang
Shanghai Jiao Tong University
Yongkang Zhou
Shanghai Jiao Tong University; SII
Jiancai Ye
Shanghai Jiao Tong University
Yu Wang
Tsinghua University
Guohao Dai
Associate Professor of Shanghai Jiao Tong University
Sparse Computation · Large-scale Graph Processing · FPGA · Circuits and Systems