AI Summary
Deploying large language models (LLMs) on consumer-grade devices, such as laptops with 6 GB of GPU memory, is hindered by severe GPU memory constraints and low hardware utilization. To address this, we propose a fine-grained offloading pipeline architecture built on three synergistic techniques: (1) dynamic block-wise offloading that leverages GPU-CPU heterogeneous collaboration; (2) computation-communication overlap via pipelined scheduling; and (3) memory-aware, layer-granular task partitioning. Together, these enable efficient coordination between data movement and computation. Compared to conventional offloading approaches, our method substantially reduces GPU idle time and increases concurrency between transfer and compute. On an RTX 3060 GPU, it raises average GPU utilization from below 40% to over 90% and achieves up to 3.1× higher end-to-end inference throughput than baseline methods. The proposed system delivers a scalable, architecture-aware solution for efficient LLM deployment at the edge.
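To make the computation-communication overlap concrete, the following is a minimal PyTorch sketch of the general technique, not the PIPO implementation itself: while the current block computes on the default stream, the next block's weights are prefetched from pinned CPU memory on a separate copy stream. The block count, hidden size, and the toy matmul standing in for a transformer block are illustrative assumptions.

```python
# Illustrative sketch of pipelined weight offloading (not the authors' code).
import torch

device = torch.device("cuda")
copy_stream = torch.cuda.Stream()              # dedicated stream for CPU->GPU copies
compute_stream = torch.cuda.current_stream()   # default stream runs the computation

# Toy "blocks": weights kept in pinned (page-locked) CPU memory so copies can be async.
num_blocks, hidden = 8, 1024
cpu_blocks = [torch.randn(hidden, hidden).pin_memory() for _ in range(num_blocks)]

def prefetch(block_cpu):
    """Asynchronously copy one block's weights to the GPU on the copy stream."""
    with torch.cuda.stream(copy_stream):
        w = block_cpu.to(device, non_blocking=True)
        ready = torch.cuda.Event()
        ready.record()                         # marks completion of this copy
    return w, ready

x = torch.randn(1, hidden, device=device)
next_item = prefetch(cpu_blocks[0])            # stage the first block before the loop
for i in range(num_blocks):
    w, ready = next_item
    if i + 1 < num_blocks:
        next_item = prefetch(cpu_blocks[i + 1])    # overlap next copy with current compute
    compute_stream.wait_event(ready)               # wait only for this block's weights
    w.record_stream(compute_stream)                # keep allocator aware of cross-stream use
    x = torch.relu(x @ w)                          # stand-in for the block's computation
torch.cuda.synchronize()
print(x.shape)
```

Because the prefetch for block i+1 is issued before the compute stream starts on block i, the PCIe transfer and the GPU kernels run concurrently, which is the source of the reduced idle time described above.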
Abstract
The high memory and computation demand of large language models (LLMs) makes them difficult to deploy on consumer devices with limited GPU memory. Offloading can mitigate the memory constraint but often suffers from low GPU utilization, leading to low inference efficiency. In this work, we propose a novel framework, called pipelined offloading (PIPO), for efficient inference on consumer devices. PIPO designs a fine-grained offloading pipeline, complemented with optimized data transfer and computation, to achieve high concurrency and efficient scheduling for inference. Experimental results show that, compared with state-of-the-art baselines, PIPO increases GPU utilization from below 40% to over 90% and achieves up to 3.1$\times$ higher throughput, running on a laptop equipped with an RTX 3060 GPU with 6 GB of memory.
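As a side note on the "optimized data transfer" component, the micro-benchmark below (our own sketch, not taken from the paper) shows why offloading pipelines typically stage weights in pinned host memory: page-locked buffers allow asynchronous DMA and markedly faster host-to-device copies than pageable memory. The tensor size and iteration count are arbitrary assumptions.

```python
# Illustrative micro-benchmark: pageable vs. pinned host-to-device copy time.
import time
import torch

def time_h2d(host_tensor, iters=20):
    """Average time for repeated CPU->GPU copies of the given host tensor."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        host_tensor.to("cuda", non_blocking=True)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

size_mb = 256
pageable = torch.empty(size_mb * 1024 * 1024 // 4, dtype=torch.float32)
pinned = pageable.pin_memory()                 # page-locked copy enables async DMA

print(f"pageable H2D: {time_h2d(pageable) * 1e3:.1f} ms")
print(f"pinned   H2D: {time_h2d(pinned) * 1e3:.1f} ms")
```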