Efficient LLM Inference using Dynamic Input Pruning and Cache-Aware Masking

📅 2024-12-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the DRAM bandwidth bottleneck in mobile LLM inference and the ineffectiveness of conventional dynamic sparsification on SwiGLU layers, which lack inherent activation sparsity, this paper proposes Dynamic Input Pruning (DIP), a predictor-free method for efficiently sparsifying SwiGLU layers. DIP integrates lightweight LoRA adapters to compensate for accuracy loss and introduces a novel masking mechanism that jointly leverages cache state and activation magnitude to improve cache hit rates. Crucially, DIP requires no auxiliary prediction modules, remains compatible with existing hardware stacks, and supports streaming weights from Flash storage. Evaluated on Phi-3-Medium, DIP reduces memory footprint by 46% and increases token-generation throughput by 40% over the dense Flash-streaming baseline, while incurring negligible perplexity degradation (<0.1). It outperforms prior sparsification approaches in both efficiency and accuracy.

📝 Abstract
While mobile devices provide ever more compute power, improvements in DRAM bandwidth are much slower. This is unfortunate for large language model (LLM) token generation, which is heavily memory-bound. Previous work has proposed to leverage natural dynamic activation sparsity in ReLU-activated LLMs to reduce effective DRAM bandwidth per token. However, more recent LLMs use SwiGLU instead of ReLU, which results in little inherent sparsity. While SwiGLU activations can be pruned based on magnitude, the resulting sparsity patterns are difficult to predict, rendering previous approaches ineffective. To circumvent this issue, our work introduces Dynamic Input Pruning (DIP): a predictor-free dynamic sparsification approach, which preserves accuracy with minimal fine-tuning. DIP can further use lightweight LoRA adapters to regain some performance lost during sparsification. Lastly, we describe a novel cache-aware masking strategy, which considers the cache state and activation magnitude to further increase cache hit rate, improving LLM token rate on mobile devices. DIP outperforms other methods in terms of accuracy, memory and throughput trade-offs across simulated hardware settings. On Phi-3-Medium, DIP achieves a 46% reduction in memory and 40% increase in throughput with <0.1 loss in perplexity when compared to streaming the dense model from Flash. The open source code for HW simulator, methods, and experiments in this paper is available at https://github.com/Qualcomm-AI-research/dynamic-sparsity.
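The core idea described in the abstract, magnitude-based pruning of the SwiGLU intermediate activation so that only the corresponding rows of the down-projection need to be fetched from DRAM/Flash, can be sketched as follows. This is a minimal illustration; the function names, the top-k selection, and the `keep_ratio` value are assumptions for exposition, not the paper's exact implementation:

```python
import numpy as np

def silu(x):
    """SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_dip(x, w_gate, w_up, w_down, keep_ratio=0.25):
    """Illustrative sketch of Dynamic Input Pruning on a SwiGLU MLP.

    Computes the SwiGLU intermediate activation, keeps only the largest-
    magnitude entries (a hypothetical top-k rule), and zeroes the rest.
    In a real system, only the down-projection rows matching surviving
    entries would need to be read from memory, saving DRAM bandwidth.
    """
    h = silu(x @ w_gate) * (x @ w_up)          # SwiGLU intermediate activation
    k = max(1, int(keep_ratio * h.size))       # number of entries to keep
    keep = np.argsort(-np.abs(h).ravel())[:k]  # indices of largest |h|
    mask = np.zeros(h.size)
    mask[keep] = 1.0
    return (h * mask.reshape(h.shape)) @ w_down
```

With `keep_ratio=1.0` this reduces to the dense SwiGLU forward pass; shrinking the ratio trades accuracy (here, exactness) for fewer down-projection rows touched per token.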
Problem

Research questions and friction points this paper is trying to address.

Reduces DRAM bandwidth for LLM token generation
Addresses sparsity unpredictability in SwiGLU-activated LLMs
Improves cache hit rate for mobile LLM throughput
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Input Pruning for SwiGLU LLMs
Cache-aware masking boosts cache hits
LoRA adapters recover sparsification performance
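The cache-aware masking idea above, selecting neurons by jointly weighing activation magnitude and whether their weights are already cached, can be sketched as a simple scoring heuristic. The multiplicative `cache_bonus` knob is a hypothetical stand-in for the paper's actual rule:

```python
import numpy as np

def cache_aware_mask(magnitudes, cached, k, cache_bonus=1.5):
    """Sketch of cache-aware masking (not the paper's exact mechanism).

    Scores each neuron by its activation magnitude, boosting neurons whose
    weights are already resident in cache, then keeps the top-k scorers.
    Preferring cached neurons at similar magnitude raises the cache hit
    rate and thus reduces DRAM/Flash traffic per generated token.
    """
    score = np.abs(magnitudes) * np.where(cached, cache_bonus, 1.0)
    keep = np.argsort(-score)[:k]          # indices of the k highest scores
    mask = np.zeros(len(magnitudes), dtype=bool)
    mask[keep] = True
    return mask
```

For example, a cached neuron with magnitude 0.5 can outrank an uncached one with magnitude 0.9 when the bonus is large enough, trading a small activation-fidelity loss for a guaranteed cache hit.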
Marco Federici
Qualcomm AI Research, Qualcomm Technologies, Inc.
Davide Belli
Qualcomm AI Research, Qualcomm Technologies, Inc.
M. V. Baalen
Qualcomm AI Research, Qualcomm Technologies, Inc.
Amir Jalalirad
Andrii Skliar
B. Major
Markus Nagel
Qualcomm AI Research
Machine Learning · Deep Learning
Paul N. Whatmough