🤖 AI Summary
This work addresses the performance bottleneck in large language model inference caused by attention computation, which intensifies with growing context lengths and model scales. A key challenge arises when combining head parallelism with sparsification: heterogeneous sparsity elasticity across attention heads leads to GPU load imbalance. The study is the first to reveal that attention heads exhibit stable yet heterogeneous sparsity elasticity, and it leverages this insight to propose a head-adaptive sparsity budgeting mechanism together with a sparsity-aware head-parallel load-balancing strategy (S-HPLB) that co-optimizes inference quality and system-level computational efficiency. Experiments on long-context benchmarks show that the proposed approach improves average attention computation latency by 2.88× without degrading inference quality.
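To make the head-adaptive budgeting idea concrete, here is a minimal sketch (not the paper's implementation) of sparse attention in which each head keeps only its top-`budget` attention pairs per query. The function name, tensor shapes, and the specific per-head budgets are illustrative assumptions; in the paper, budgets would be derived from each head's measured sparsity elasticity.

```python
import torch

def head_adaptive_sparse_attention(q, k, v, head_budgets):
    """Sparse attention where head h keeps only its top-`head_budgets[h]` key positions per query.

    q, k, v: tensors of shape [num_heads, seq_len, head_dim]
    head_budgets: per-head number of attention pairs to keep (hypothetical values).
    """
    num_heads, seq_len, head_dim = q.shape
    out = torch.empty_like(q)
    for h in range(num_heads):
        # Full attention scores for this head: [seq_len, seq_len].
        scores = q[h] @ k[h].transpose(0, 1) / head_dim ** 0.5
        budget = min(head_budgets[h], seq_len)
        # Keep only the top-`budget` pairs per query; mask out the rest.
        topk = scores.topk(budget, dim=-1)
        sparse_scores = torch.full_like(scores, float("-inf"))
        sparse_scores.scatter_(-1, topk.indices, topk.values)
        out[h] = torch.softmax(sparse_scores, dim=-1) @ v[h]
    return out

# Toy usage: heads with higher sparsity elasticity tolerate smaller budgets.
q = k = v = torch.randn(4, 128, 64)
budgets = [16, 32, 64, 128]  # hypothetical head-adaptive budgets
y = head_adaptive_sparse_attention(q, k, v, budgets)
```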
📝 Abstract
With the increasing scale of Large Language Models (LLMs) and ever-longer context lengths, attention computation has become a key performance bottleneck in LLM serving. For fast attention computation, recent practices often parallelize the attention heads across multiple GPUs and widely adopt attention sparsification -- which selectively computes only a subset of attention pairs under a preset sparsity budget -- to reduce the amount of computation. In this paper, we observe that the attention heads of an LLM often exhibit heterogeneous-yet-stable sparsity elasticities, which motivates us to enforce head-adaptive sparsity budgets to attain better efficiency while preserving high inference quality. Yet, from the system perspective, heterogeneous sparsity levels make attention computation times inconsistent across heads, yielding cross-GPU resource bubbles under head-parallel deployment. To minimize such bubbles, we propose a novel attention deployment strategy called Sparsity-aware Head-Parallel Load Balance (S-HPLB). Experiments on long-context benchmarks show that S-HPLB achieves a $2.88\times$ improvement in average attention computation latency without quality degradation.
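For the load-balancing side, the sketch below shows one plausible way to realize sparsity-aware head placement: a greedy longest-processing-time assignment that treats each head's sparsity budget as a proxy for its attention cost and places the heaviest remaining head on the least-loaded GPU. The actual S-HPLB strategy may differ; all names and budget values here are hypothetical.

```python
import heapq

def balance_heads(head_budgets, num_gpus):
    """Greedily assign heads to GPUs so that total per-GPU budget is balanced.

    head_budgets: per-head sparsity budgets, used as a proxy for attention cost.
    Returns a list mapping each GPU to the head indices it serves.
    """
    # Min-heap of (current load, gpu_id) so the least-loaded GPU is popped first.
    loads = [(0, g) for g in range(num_gpus)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(num_gpus)]
    # Visit heads from the largest budget (most work) to the smallest.
    for head, budget in sorted(enumerate(head_budgets), key=lambda x: -x[1]):
        load, gpu = heapq.heappop(loads)
        assignment[gpu].append(head)
        heapq.heappush(loads, (load + budget, gpu))
    return assignment

# Toy usage: 8 heads with heterogeneous budgets spread over 2 GPUs.
print(balance_heads([16, 32, 64, 128, 16, 96, 48, 64], num_gpus=2))
```

Compared with naive round-robin head placement, weighting the assignment by per-head budgets keeps the per-GPU work roughly equal, which is the source of the cross-GPU bubble reduction the abstract describes.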