🤖 AI Summary
This work addresses key challenges in large language model (LLM) inference, including memory bandwidth bottlenecks, computational redundancy, and inefficiencies in processing long sequences. To overcome these limitations, the authors propose a co-optimized algorithm-hardware framework that integrates FP8 quantization, key-value cache compression (Opt-KV), grouped-query attention (Opt-GQA), and paged attention (Opt-Pa), augmented with lazy-loading memory mapping to enable efficient long-context inference on heterogeneous platforms. Evaluated on the LLaMa-13B-GPTQ model, the approach achieves up to a 13.43% increase in inference throughput and a 16.79% reduction in latency while preserving model accuracy, substantially outperforming existing methods.
📝 Abstract
LLM inference continues to face major challenges, including frequent memory bandwidth bottlenecks, computational redundancy, and inefficiencies in long-sequence processing. To address these issues, we propose LLM-CoOpt, a comprehensive algorithm-hardware co-design framework aimed at improving both throughput and latency in LLM inference. LLM-CoOpt integrates three key strategies: (1) Key-Value Cache Optimization, termed Opt-KV, which improves memory access efficiency by optimizing both KV cache write and read paths, and introduces FP8 quantization to reduce memory footprint while maintaining accuracy; (2) Grouped-Query Attention for Computational Efficiency, termed Opt-GQA, which reduces overall computational complexity by restructuring multi-head self-attention into grouped-query attention with shared key-value projections, enabling higher throughput and lower resource consumption; (3) Paged Attention for Long-Sequence Processing, termed Opt-Pa, which adopts a two-step strategy that first segments long sequences into manageable chunks and then applies lazy memory mapping and computation, significantly reducing memory pressure and improving performance on long-context inputs. Experiments on the LLaMa-13B-GPTQ model demonstrate that LLM-CoOpt increases inference throughput by up to 13.43%, reduces latency by up to 16.79%, and maintains model accuracy. These results confirm that LLM-CoOpt provides a practical, high-performance optimization path for real-world inference of large-scale language models.
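
To make the Opt-KV idea concrete, here is a minimal PyTorch sketch of FP8 (e4m3) KV-cache quantization with a per-tensor scale. The helper names (`quantize_kv_fp8`, `dequantize_kv_fp8`) and the scaling scheme are illustrative assumptions, not the paper's implementation, which also restructures the cache's write and read paths; `torch.float8_e4m3fn` requires PyTorch 2.1 or later.

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in e4m3

def quantize_kv_fp8(kv: torch.Tensor):
    """Quantize a KV-cache tensor to FP8 with a per-tensor scale (assumed scheme)."""
    amax = kv.abs().amax().clamp(min=1e-12)
    scale = FP8_E4M3_MAX / amax
    kv_fp8 = (kv.float() * scale).to(torch.float8_e4m3fn)  # halves an FP16 cache's footprint
    return kv_fp8, scale

def dequantize_kv_fp8(kv_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Restore an FP16 view of the cache before attention is computed."""
    return kv_fp8.to(torch.float16) / scale
```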
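
Opt-GQA's restructuring follows the standard grouped-query attention pattern: many query heads share a smaller set of key-value heads, shrinking both the KV projections and the KV cache. A minimal sketch, assuming the shared KV heads are broadcast by repetition (the paper's exact projection sharing may differ):

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """q: (B, S, Hq, D); k, v: (B, S, Hkv, D) with Hq a multiple of Hkv."""
    group = q.shape[2] // k.shape[2]
    # Each KV head serves a whole group of query heads.
    k = k.repeat_interleave(group, dim=2)
    v = v.repeat_interleave(group, dim=2)
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # -> (B, H, S, D)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2)  # back to (B, S, H, D)

# Example: 32 query heads sharing 8 KV heads (4x smaller KV cache)
q = torch.randn(1, 128, 32, 64)
k = torch.randn(1, 128, 8, 64)
v = torch.randn(1, 128, 8, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 128, 32, 64])
```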
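
Opt-Pa's two-step strategy can be pictured as a block-table cache in which a physical page is bound to a logical chunk only when that chunk is first written, in the spirit of paged attention. The class below is a hypothetical sketch; the page size, pool layout, and mapping policy are assumptions, not details from the paper:

```python
import torch

PAGE_SIZE = 16  # tokens per page (assumed)

class LazyPagedKVCache:
    """KV cache that maps a logical chunk to a physical page on first write."""

    def __init__(self, n_pages: int, n_kv_heads: int, head_dim: int):
        # One contiguous pool; individual pages stay unmapped until touched.
        self.pool = torch.empty(n_pages, PAGE_SIZE, n_kv_heads, head_dim,
                                dtype=torch.float16)
        self.free_pages = list(range(n_pages))
        self.block_table = {}  # logical page index -> physical page index

    def write(self, pos: int, kv: torch.Tensor) -> None:
        """Store the (n_kv_heads, head_dim) KV vectors for token position `pos`."""
        logical, offset = divmod(pos, PAGE_SIZE)
        if logical not in self.block_table:  # lazy mapping on first touch
            self.block_table[logical] = self.free_pages.pop()
        self.pool[self.block_table[logical], offset] = kv

    def read(self, pos: int) -> torch.Tensor:
        logical, offset = divmod(pos, PAGE_SIZE)
        return self.pool[self.block_table[logical], offset]
```

Because only touched pages receive physical backing, a long sequence processed chunk by chunk never materializes its full KV cache at once, which is the memory-pressure reduction the abstract attributes to Opt-Pa.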