🤖 AI Summary
To address the DRAM bandwidth bottleneck in mobile LLM inference, where conventional dynamic sparsification fails on SwiGLU layers because they lack inherent activation sparsity, this paper proposes Dynamic Input Pruning (DIP): a predictor-free method that sparsifies SwiGLU layers by pruning activations based on magnitude. DIP preserves accuracy with minimal fine-tuning and can optionally use lightweight LoRA adapters to recover accuracy lost to sparsification. It also introduces a cache-aware masking strategy that jointly considers cache state and activation magnitude to raise cache hit rates. Because it requires no auxiliary prediction module, DIP avoids the overhead and unreliability of predicting SwiGLU sparsity patterns. Evaluated on Phi-3-Medium, DIP reduces memory footprint by 46% and increases token-generation throughput by 40% relative to streaming the dense model from Flash, with less than 0.1 degradation in perplexity, and it outperforms prior sparsification approaches on accuracy, memory, and throughput trade-offs.
📝 Abstract
While mobile devices provide ever more compute power, improvements in DRAM bandwidth are much slower. This is unfortunate for large language model (LLM) token generation, which is heavily memory-bound. Previous work has proposed to leverage natural dynamic activation sparsity in ReLU-activated LLMs to reduce effective DRAM bandwidth per token. However, more recent LLMs use SwiGLU instead of ReLU, which results in little inherent sparsity. While SwiGLU activations can be pruned based on magnitude, the resulting sparsity patterns are difficult to predict, rendering previous approaches ineffective. To circumvent this issue, our work introduces Dynamic Input Pruning (DIP): a predictor-free dynamic sparsification approach, which preserves accuracy with minimal fine-tuning. DIP can further use lightweight LoRA adapters to regain some performance lost during sparsification. Lastly, we describe a novel cache-aware masking strategy, which considers the cache state and activation magnitude to further increase cache hit rate, improving LLM token rate on mobile devices. DIP outperforms other methods in terms of accuracy, memory, and throughput trade-offs across simulated hardware settings. On Phi-3-Medium, DIP achieves a 46% reduction in memory and a 40% increase in throughput with <0.1 loss in perplexity when compared to streaming the dense model from Flash. The open-source code for the HW simulator, methods, and experiments in this paper is available at https://github.com/Qualcomm-AI-research/dynamic-sparsity.
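The core mechanism the abstract describes (magnitude-based pruning of SwiGLU intermediate activations, optionally biased toward neurons whose down-projection rows already sit in cache) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names, the multiplicative `cache_bonus` scoring rule, and the parameter choices are illustrative, and the paper's actual masking rule may differ.

```python
import numpy as np

def silu(x):
    """SiLU activation used in SwiGLU: x * sigmoid(x)."""
    return x * (1.0 / (1.0 + np.exp(-x)))

def dip_swiglu(x, W_gate, W_up, W_down, cached=None, sparsity=0.5, cache_bonus=0.0):
    """SwiGLU FFN with magnitude-based dynamic input pruning (sketch).

    x:       (d_model,) hidden state for one token
    W_gate:  (d_model, d_ff) gate projection
    W_up:    (d_model, d_ff) up projection
    W_down:  (d_ff, d_model) down projection
    cached:  optional (d_ff,) boolean mask of W_down rows resident in cache
    Returns the FFN output and the indices of the kept neurons.
    """
    # Intermediate SwiGLU activations for this token.
    h = silu(x @ W_gate) * (x @ W_up)
    d_ff = h.shape[-1]
    k = max(1, int(round(d_ff * (1.0 - sparsity))))
    # Score neurons by activation magnitude; cached neurons get a
    # multiplicative bonus so the mask favors rows already in cache
    # (an assumed form of the cache-aware masking idea).
    score = np.abs(h)
    if cached is not None and cache_bonus > 0.0:
        score = score * (1.0 + cache_bonus * cached.astype(h.dtype))
    keep = np.argsort(score)[-k:]
    # Only the k selected rows of W_down need to be fetched from memory,
    # which is where the per-token bandwidth saving comes from.
    return h[keep] @ W_down[keep, :], keep
```

At 50% sparsity, only half the rows of the down projection are read per token; with `sparsity=0.0` the function reduces exactly to the dense SwiGLU FFN, since keeping all neurons reproduces the full sum.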