PackInfer: Compute- and I/O-Efficient Attention for Batched LLM Inference

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges posed by highly heterogeneous sequence lengths in batched large language model (LLM) inference, which lead to imbalanced compute and I/O loads, low GPU utilization, and high tail latency. To tackle this, the authors propose an attention execution framework tailored for heterogeneous batching, achieving the first kernel-level co-optimization that is both computation- and I/O-aware. The framework employs load-balanced grouping, colocates shared prefixes, reorganizes KV cache layouts, and introduces a customized packed attention kernel to effectively eliminate redundant computation and memory fragmentation. Experimental results on real-world workloads demonstrate that the proposed approach reduces inference latency by 13.0–20.1% and improves throughput by 20% compared to FlashAttention.
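To make the grouping idea concrete, the sketch below shows one way load-balanced, shared-prefix-aware grouping could look: requests sharing a prefix are bundled, then bundles are placed greedily on the lightest group. This is an illustrative assumption, not PackInfer's actual algorithm; the Request, group_requests, and num_groups names are hypothetical, and attention cost is approximated here by KV length alone.

```python
# Illustrative sketch only (not PackInfer's real policy): greedy load-balanced,
# prefix-aware grouping of batched requests.
from dataclasses import dataclass
import heapq

@dataclass
class Request:
    req_id: int
    prefix_id: int   # identifier of a shared prompt prefix, if any
    seq_len: int     # current KV length; per-step attention work scales with it

def group_requests(requests, num_groups):
    """Assign requests to groups so per-group attention work is roughly balanced,
    keeping requests that share a prefix in the same group."""
    # Bundle shared-prefix requests so they land in the same group.
    bundles = {}
    for r in requests:
        bundles.setdefault(r.prefix_id, []).append(r)

    # Longest-processing-time heuristic: heaviest bundle goes to the lightest group.
    heap = [(0, gid, []) for gid in range(num_groups)]  # (total_len, group_id, members)
    heapq.heapify(heap)
    for bundle in sorted(bundles.values(), key=lambda b: -sum(r.seq_len for r in b)):
        load, gid, members = heapq.heappop(heap)
        members.extend(bundle)
        heapq.heappush(heap, (load + sum(r.seq_len for r in bundle), gid, members))
    return {gid: members for _, gid, members in heap}

if __name__ == "__main__":
    reqs = [Request(i, prefix_id=i % 3, seq_len=n)
            for i, n in enumerate([4096, 128, 2048, 256, 1024, 512])]
    for gid, members in sorted(group_requests(reqs, num_groups=2).items()):
        print(gid, [(r.req_id, r.seq_len) for r in members])
```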

📝 Abstract
Attention efficiency is critical to large language model (LLM) inference. While prior advances optimize attention execution for individual requests (e.g., FlashAttention), production LLM serving relies on batching requests with highly heterogeneous sequence lengths for high serving throughput. This mismatch induces severe computation and I/O imbalance, exacerbates stragglers, and underutilizes GPU resources. We present PackInfer, a kernel-level attention framework that enables compute- and I/O-aware execution for heterogeneous batched inference. PackInfer orchestrates batched requests into load-balanced execution groups, effectively saturating GPU utilization by packing multiple requests into unified kernel launches. By constructing attention kernels directly over packed query-key regions, PackInfer eliminates redundant computation and balances thread-block execution. It then incorporates I/O-aware grouping that co-locates shared-prefix requests and reorganizes KV caches into group-contiguous layouts, reducing memory fragmentation and redundant data movement as generation evolves. Evaluations on real-world workloads show that PackInfer reduces inference latency by 13.0-20.1%, and improves throughput by 20% compared to the state-of-the-art FlashAttention.
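As one way to picture the packed, group-contiguous KV layout the abstract refers to, the following NumPy sketch packs variable-length per-request KV tensors into a single contiguous buffer with cumulative offsets, in the spirit of a varlen attention interface. It illustrates only the data layout; the function names and shapes are assumptions, and the actual PackInfer kernel operates on packed query-key regions on the GPU.

```python
# Minimal layout sketch (assumed, not PackInfer's implementation): pack
# variable-length KV caches contiguously and index them via cumulative offsets.
import numpy as np

def pack_kv(kv_per_request):
    """Concatenate per-request KV tensors of shape [len_i, heads, dim] into one
    contiguous buffer plus cumulative offsets (no per-request padding)."""
    lens = [kv.shape[0] for kv in kv_per_request]
    cu_seqlens = np.zeros(len(lens) + 1, dtype=np.int32)
    cu_seqlens[1:] = np.cumsum(lens)
    packed = np.concatenate(kv_per_request, axis=0)  # [sum(len_i), heads, dim]
    return packed, cu_seqlens

def slice_request(packed, cu_seqlens, i):
    """Recover request i's KV region from the packed buffer."""
    return packed[cu_seqlens[i]:cu_seqlens[i + 1]]

if __name__ == "__main__":
    heads, dim = 8, 64
    kvs = [np.random.randn(n, heads, dim).astype(np.float32) for n in (5, 17, 3)]
    packed, cu = pack_kv(kvs)
    assert np.array_equal(slice_request(packed, cu, 1), kvs[1])
    print(packed.shape, cu)  # (25, 8, 64) [ 0  5 22 25]
```

A contiguous layout of this kind avoids per-request padding, which is where the reduction in memory fragmentation and redundant data movement described in the abstract would come from.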
Problem

Research questions and friction points this paper is trying to address.

attention efficiency
batched LLM inference
heterogeneous sequence lengths
compute imbalance
I/O imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

PackInfer
batched inference
attention optimization
I/O-aware grouping
GPU utilization
Rui Ning
Old Dominion University
Secure AI
Wei Zhang
Siebel Center for Computer Science, University of Illinois Urbana-Champaign, Urbana, IL, USA
Fan Lai
University of Illinois Urbana-Champaign
Machine Learning Systems
Cloud Computing
Machine Learning