🤖 AI Summary
Existing attention implementations ignore the prevalent hierarchical shared prefixes across requests (system prompts, RAG contexts, templates), leading to redundant KV cache loading, low on-chip resource utilization, increased memory bandwidth pressure, and decoding stalls. This work proposes PAT, a prefix-aware attention execution framework built around a *pack-forward-merge* paradigm: queries are packed by shared prefix, forwarded through a resource-adaptive multi-tile kernel that combines multi-stream parallelism with fine-grained KV partitioning, and their partial results are merged via online softmax. The approach substantially reduces redundant memory accesses and computational idleness. Experiments on real-world and synthetic workloads show an average 67.4% reduction in attention latency and a 13.6%–83.4% decrease in time-per-output-token (TPOT). The framework delivers an efficient, scalable acceleration solution for memory-bound large language model inference.
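For intuition on the merge step, partial attention outputs over disjoint KV segments can be combined exactly using each segment's running max and softmax denominator; this is the standard online-softmax (flash-decoding-style) identity the summary refers to. Below is a minimal NumPy sketch of that idea; the function names are ours for illustration, not PAT's API.

```python
import numpy as np

def attend_partial(q, K, V, scale):
    """Attention of one query over a single KV segment. Returns the normalized
    partial output plus the running max and denominator needed for merging."""
    scores = (q @ K.T) * scale           # [num_keys]
    m = scores.max()                     # segment-local max
    p = np.exp(scores - m)               # stabilized weights
    s = p.sum()                          # segment-local denominator
    return (p @ V) / s, m, s

def merge_partials(parts):
    """Combine per-segment partials into one softmax over all keys, exactly."""
    m = max(m_i for _, m_i, _ in parts)
    s = sum(s_i * np.exp(m_i - m) for _, m_i, s_i in parts)
    o = sum(o_i * s_i * np.exp(m_i - m) for o_i, m_i, s_i in parts) / s
    return o

# Sanity check: merging two KV splits reproduces attention over the full KV.
rng = np.random.default_rng(0)
d, n1, n2 = 64, 96, 32
q = rng.standard_normal(d)
K = rng.standard_normal((n1 + n2, d))
V = rng.standard_normal((n1 + n2, d))
scale = 1.0 / np.sqrt(d)
full, _, _ = attend_partial(q, K, V, scale)
merged = merge_partials([attend_partial(q, K[:n1], V[:n1], scale),
                         attend_partial(q, K[n1:], V[n1:], scale)])
assert np.allclose(full, merged)
```

Because the merge is exact and cheap, attention over a shared prefix and over each request's private suffix can be computed separately and combined afterwards with negligible overhead.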
📝 Abstract
LLM serving is increasingly dominated by decode attention, a memory-bound operation dominated by loading large KV caches from global memory. Meanwhile, real-world workloads exhibit substantial, hierarchical shared prefixes across requests (e.g., system prompts, tools/templates, RAG). Existing attention implementations fail to fully exploit prefix sharing: *one-query-per-CTA* execution repeatedly loads the shared prefix's KV cache, while *one-size-fits-all* tiling leaves on-chip resources idle and exacerbates bubbles when KV lengths are uneven. These choices amplify memory bandwidth pressure and stall memory-bound decode attention.
This paper introduces PAT, a prefix-aware attention kernel implementation for LLM decoding that organizes execution with a pack-forward-merge paradigm. PAT packs queries by shared prefix to reduce repeated memory accesses and runs a customized multi-tile kernel to achieve high resource efficiency. It further applies practical multi-stream forwarding and KV splitting to reduce resource bubbles, and the final merge performs online softmax with negligible overhead. We implement PAT as an off-the-shelf plugin for vLLM. Evaluation on both real-world and synthetic workloads shows that PAT reduces attention latency by 67.4% on average and TPOT by 13.6%–83.4% under the same configurations, compared with state-of-the-art attention kernels.
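To show how the three phases fit together, here is a rough host-side sketch of the pack-forward-merge flow, reusing `attend_partial` and `merge_partials` from the sketch above. The grouping structures (`prefix_of`, `prefix_kv`, `suffix_kv`) and the per-request loop are our illustrative stand-ins, not PAT's or vLLM's actual interfaces; the real work happens inside a fused GPU kernel with multi-stream forwarding and KV splitting.

```python
from collections import defaultdict

def pat_decode_step(queries, prefix_of, prefix_kv, suffix_kv, scale):
    # Pack: group requests by the prefix they share.
    groups = defaultdict(list)
    for req, pid in prefix_of.items():
        groups[pid].append(req)

    outputs = {}
    # Forward: one pass per prefix group, so the shared-prefix KV is read once
    # per group instead of once per request.
    for pid, reqs in groups.items():
        Kp, Vp = prefix_kv[pid]
        for req in reqs:
            q = queries[req]
            prefix_part = attend_partial(q, Kp, Vp, scale)
            Ks, Vs = suffix_kv[req]      # request-private decoded tokens
            suffix_part = attend_partial(q, Ks, Vs, scale)
            # Merge: combine the partials with online softmax.
            outputs[req] = merge_partials([prefix_part, suffix_part])
    return outputs
```

In the actual kernel, the per-request loop is replaced by packed query tiles that share the prefix KV load, and long KV segments are further split so their partials can be computed in parallel and merged the same way.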