PAT: Accelerating LLM Decoding via Prefix-Aware Attention with Resource Efficient Multi-Tile Kernel

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing attention implementations ignore the prevalent hierarchical shared prefixes across requests (such as system prompts, RAG contexts, and templates), leading to redundant KV cache loading, low on-chip resource utilization, increased memory bandwidth pressure, and decoding stalls. This work proposes a prefix-aware attention execution framework built on a novel *pack-forward-merge* paradigm: queries are first packed by shared prefix, then forwarded through a resource-adaptive multi-tile kernel that combines multi-stream parallelism with fine-grained KV partitioning, and the partial results are finally combined via online softmax merging. The approach significantly reduces redundant memory access and computational idleness. Experiments on real-world and synthetic workloads show an average 67.4% reduction in attention latency and a 13.6%–83.4% decrease in time per output token (TPOT). The framework delivers an efficient, scalable acceleration solution for memory-bound large language model inference.

📝 Abstract
LLM serving is increasingly dominated by decode attention, which is a memory-bound operation due to massive KV cache loading from global memory. Meanwhile, real-world workloads exhibit substantial, hierarchical shared prefixes across requests (e.g., system prompts, tools/templates, RAG). Existing attention implementations fail to fully exploit prefix sharing: *one-query-per-CTA* execution repeatedly loads shared prefix KV cache, while *one-size-fits-all* tiling leaves on-chip resources idle and exacerbates bubbles for uneven KV lengths. These choices amplify memory bandwidth pressure and stall memory-bound decode attention. This paper introduces PAT, a prefix-aware attention kernel implementation for LLM decoding that organizes execution with a pack-forward-merge paradigm. PAT packs queries by shared prefix to reduce repeated memory accesses and runs a customized multi-tile kernel to achieve high resource efficiency. It further applies practical multi-stream forwarding and KV splitting to reduce resource bubbles. The final merge performs online softmax with negligible overhead. We implement PAT as an off-the-shelf plugin for vLLM. Evaluation on both real-world and synthetic workloads shows that PAT reduces attention latency by 67.4% on average and TPOT by 13.6%–83.4% under the same configurations against state-of-the-art attention kernels.
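The merge step described in the abstract relies on a standard property of softmax: attention computed independently over disjoint KV splits can be recombined exactly using each split's log-sum-exp. The sketch below illustrates that mechanism in NumPy; it is not PAT's kernel, and the function names are illustrative.

```python
import numpy as np

def attend_split(scores, values):
    """Attention over one KV split: returns the softmax-weighted output
    and the log-sum-exp (LSE) of this split's logits.
    Illustrative sketch, not PAT's actual kernel."""
    m = scores.max()
    e = np.exp(scores - m)          # numerically stable exponentials
    return (e @ values) / e.sum(), m + np.log(e.sum())

def merge_splits(partials):
    """Merge (output, lse) pairs from KV splits. Weighting each split's
    output by exp(lse) reproduces softmax attention over the full KV."""
    outs = np.stack([o for o, _ in partials])   # (num_splits, head_dim)
    lses = np.array([l for _, l in partials])   # (num_splits,)
    w = np.exp(lses - lses.max())               # stable per-split weights
    return (w[:, None] * outs).sum(axis=0) / w.sum()
```

Because the merge only needs one vector and one scalar per split, it is cheap relative to the KV loads it saves, which is consistent with the abstract's claim of negligible merge overhead.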
Problem

Research questions and friction points this paper is trying to address.

Repeated loading of shared-prefix KV cache wastes memory bandwidth
One-size-fits-all tiling leaves on-chip resources underutilized
Uneven KV lengths create resource bubbles that stall decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prefix-aware packing reduces repeated KV cache memory accesses
Custom multi-tile kernel improves on-chip resource utilization efficiency
Multi-stream forwarding and KV splitting minimize resource idle bubbles
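The packing idea behind the first bullet can be shown with a toy grouping step. Here `prefix_id` is a stand-in assumption for however a real serving stack identifies shared prefix blocks (e.g., a block hash); this is a minimal sketch, not PAT's implementation.

```python
from collections import defaultdict

def pack_by_prefix(requests):
    """Group decode requests that share a prefix, so each group's
    prefix KV cache is loaded from global memory once per pack
    rather than once per request.

    `requests` is a list of (request_id, prefix_id) pairs;
    `prefix_id` is a hypothetical shared-prefix identifier."""
    packs = defaultdict(list)
    for req_id, prefix_id in requests:
        packs[prefix_id].append(req_id)
    return dict(packs)
```

For example, `pack_by_prefix([("r0", "sys"), ("r1", "sys"), ("r2", "rag")])` returns `{"sys": ["r0", "r1"], "rag": ["r2"]}`, so the "sys" prefix KV would be fetched once for two queries.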
Jinjun Yi
Tianjin University, Tianjin, China
Zhixin Zhao
Tianjin University, Tianjin, China
Yitao Hu
Professor, Tianjin University (LLM System, DNN System, AI for Science)
Ke Yan
Tianjin University, Tianjin, China
Weiwei Sun
Tianjin University, Tianjin, China
Hao Wang
Stevens Institute of Technology, Hoboken, NJ, USA
Laiping Zhao
Tianjin University (Cloud Computing, Data Center, SDN)
Yuhao Zhang
Tianjin University, Tianjin, China
Wenxin Li
Tianjin University, Tianjin, China
Keqiu Li
Tianjin University, Tianjin, China