🤖 AI Summary
To address the I/O latency bottleneck in large language model (LLM) inference on smartphones, which stems largely from the severe IOPS constraints of flash storage, this work (Ripple) proposes the first flash-level storage layout optimization guided by neuron co-activation patterns. The approach works in two stages: an offline stage that analyzes neuron activation correlations and reorders weights in flash accordingly, and an online stage that schedules the DRAM cache during inference. It further integrates flash-aware access policies and a sparsity-driven dynamic DRAM loading mechanism to improve data access locality and sequentiality. Together these establish a novel algorithm–storage co-design paradigm guided by activation sparsity. Evaluated across multiple smartphone platforms and mainstream LLMs, Ripple reduces I/O latency by up to 5.93×, substantially outperforming state-of-the-art methods.
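To make the offline stage concrete, here is a minimal Python sketch of how co-activation statistics might be gathered and turned into a neuron ordering. The input format (`activation_masks`), the greedy nearest-neighbor chaining, and all function names are illustrative assumptions, not Ripple's actual placement algorithm.

```python
import numpy as np

def coactivation_matrix(activation_masks: np.ndarray) -> np.ndarray:
    """Count how often each pair of neurons fires together.

    activation_masks: (num_samples, num_neurons) boolean array recorded
    during offline profiling (hypothetical input format).
    """
    m = activation_masks.astype(np.int64)
    return m.T @ m  # C[i, j] = number of samples activating both i and j

def greedy_reorder(coact: np.ndarray) -> list[int]:
    """Chain neurons so that strongly co-activated pairs end up adjacent.

    A simple nearest-neighbor heuristic, standing in for whatever
    placement optimization the paper actually uses.
    """
    remaining = set(range(coact.shape[0]))
    cur = int(coact.sum(axis=1).argmax())  # start at the busiest neuron
    order = [cur]
    remaining.remove(cur)
    while remaining:
        # Next: the unplaced neuron most often co-activated with the last one.
        nxt = max(remaining, key=lambda j: coact[cur, j])
        order.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return order

# Hypothetical profiling data: 6 samples over 8 neurons, ~40% activation rate.
masks = np.random.default_rng(0).random((6, 8)) < 0.4
print(greedy_reorder(coactivation_matrix(masks)))
```

Whatever the exact heuristic, any ordering that packs frequently co-activated neurons contiguously turns many scattered small reads into fewer, longer sequential ones, which is the property the flash layout needs.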
📝 Abstract
Large Language Models (LLMs) have achieved remarkable success across various domains, yet deploying them on mobile devices remains an arduous challenge due to their extensive computational and memory demands. While lightweight LLMs have been developed to fit mobile environments, they suffer from degraded model accuracy. In contrast, sparsity-based techniques minimize DRAM usage by selectively transferring only relevant neurons to DRAM while retaining the full model in external storage, such as flash. However, such approaches are critically limited by numerous I/O operations, particularly on smartphones with severe IOPS constraints. In this paper, we propose Ripple, a novel approach that accelerates LLM inference on smartphones by optimizing neuron placement in flash memory. Ripple leverages the concept of Neuron Co-Activation, where neurons frequently activated together are linked to facilitate continuous read access and optimize data transfer efficiency. Our approach incorporates a two-stage solution: an offline stage that reorganizes neuron placement based on co-activation patterns, and an online stage that employs tailored data access and caching strategies to align well with hardware characteristics. Evaluations conducted on a variety of smartphones and LLMs demonstrate that Ripple achieves up to 5.93x improvements in I/O latency compared to the state-of-the-art. As the first solution to optimize storage placement under sparsity, Ripple explores a new optimization space at the intersection of sparsity-driven algorithm and storage-level system co-design in LLM inference.
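To illustrate the online stage, the sketch below models a DRAM cache managed at flash-block granularity: once the offline stage has packed co-activated neurons into the same block, a single sequential block read serves several upcoming neuron accesses. The block size, the LRU policy, and the `BlockCache` interface are assumptions made for illustration, not Ripple's tailored, hardware-aware access and caching strategies.

```python
from collections import OrderedDict

BLOCK_NEURONS = 2  # neurons per flash read block (tiny, for illustration)

class BlockCache:
    """Toy DRAM cache at flash-block granularity.

    With a co-activation-aware layout, neurons needed together share a
    block, so one sequential read covers several upcoming accesses.
    """

    def __init__(self, capacity_blocks: int, neuron_order: list[int]):
        self.capacity = capacity_blocks
        self.cached = OrderedDict()  # block id -> weights (elided here)
        # A neuron's position in the reordered layout fixes its block.
        self.block_of = {nid: pos // BLOCK_NEURONS
                         for pos, nid in enumerate(neuron_order)}
        self.flash_reads = 0

    def fetch(self, neuron_id: int) -> None:
        blk = self.block_of[neuron_id]
        if blk in self.cached:
            self.cached.move_to_end(blk)     # LRU refresh on a hit
            return
        self.flash_reads += 1                # one sequential block read
        self.cached[blk] = None              # weight loading elided
        if len(self.cached) > self.capacity:
            self.cached.popitem(last=False)  # evict least recently used

# Hypothetical replay: a layout placing co-activated neurons 3 and 4
# side by side makes the second access a cache hit.
order = [3, 4, 17, 21] + [i for i in range(8) if i not in (3, 4, 17, 21)]
cache = BlockCache(capacity_blocks=2, neuron_order=order)
for nid in (3, 4, 17):
    cache.fetch(nid)
print(cache.flash_reads)  # 2 reads for 3 accesses; an unaware layout may need 3
```

With realistic block sizes the effect compounds: the more co-activated neurons a block holds, the more of the IOPS-constrained random reads collapse into a single sequential access.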