PIPO: Pipelined Offloading for Efficient Inference on Consumer Devices

๐Ÿ“… 2025-03-15
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Deploying large language models (LLMs) on consumer-grade devicesโ€”e.g., laptops with 6 GB GPU memoryโ€”is hindered by severe GPU memory constraints and low hardware utilization. To address this, we propose a fine-grained offloading pipeline architecture that pioneers three synergistic techniques: (1) dynamic block-wise offloading leveraging GPU-CPU heterogeneous collaboration; (2) computation-communication overlap via pipelined scheduling; and (3) memory-aware, layer-granular task partitioning. These jointly enable efficient coordination between data movement and computation. Compared to conventional offloading approaches, our method substantially reduces GPU idle time and improves concurrent throughput. On an RTX 3060 GPU, it elevates average GPU utilization from below 40% to over 90%, and achieves up to 3.1ร— higher end-to-end inference throughput relative to baseline methods. The proposed system delivers a scalable, architecture-aware solution for efficient LLM deployment at the edge.
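The benefit of computation-communication overlap can be illustrated with a simple timing model (a sketch, not taken from the paper; the symbols n, t, and c are assumptions): with naive offloading, each layer's weight transfer and computation run back-to-back, while a pipelined schedule prefetches the next layer's weights during the current layer's computation, so each stage costs only the slower of the two.

```python
# Hypothetical timing model for layer-wise offloading (illustrative only).
# n = number of layers, t = per-layer weight-transfer time,
# c = per-layer compute time (all in the same unit, e.g. ms).

def serial_latency(n, t, c):
    """Naive offloading: transfer layer i, then compute layer i."""
    return n * (t + c)

def pipelined_latency(n, t, c):
    """Pipelined offloading: prefetch layer i+1 while computing layer i.
    After the first transfer, each step is gated by max(t, c)."""
    return t + c + (n - 1) * max(t, c)

if __name__ == "__main__":
    n, t, c = 32, 5.0, 6.0  # assumed example: 32 layers, 5 ms transfer, 6 ms compute
    s = serial_latency(n, t, c)
    p = pipelined_latency(n, t, c)
    print(f"serial={s} ms, pipelined={p} ms, speedup={s / p:.2f}x")
```

In this toy setting the GPU is busy only c/(t+c) of the time under serial offloading, but nearly 100% of the time once transfers are hidden behind computation, which is the effect the utilization numbers above describe.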

๐Ÿ“ Abstract
The high memory and computation demand of large language models (LLMs) makes them challenging to deploy on consumer devices due to limited GPU memory. Offloading can mitigate the memory constraint but often suffers from low GPU utilization, leading to low inference efficiency. In this work, we propose a novel framework, called pipelined offloading (PIPO), for efficient inference on consumer devices. PIPO designs a fine-grained offloading pipeline, complemented with optimized data transfer and computation, to achieve high concurrency and efficient scheduling for inference. Experimental results show that compared with the state-of-the-art baseline, PIPO increases GPU utilization from below 40% to over 90% and achieves up to 3.1× higher throughput, running on a laptop equipped with an RTX 3060 GPU with 6 GB of memory.
Problem

Research questions and friction points this paper is trying to address.

Efficient LLM inference on memory-limited consumer devices
Low GPU utilization in traditional offloading methods
Optimizing data transfer and computation for high concurrency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained offloading pipeline for high concurrency
Optimized data transfer and computation scheduling
Boosts GPU utilization from below 40% to over 90%
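The pipelining idea behind these contributions can be sketched as double-buffered prefetch: a background worker loads the next layer's weights while the current layer computes. This is a minimal illustrative sketch, not the paper's implementation; `load_weights` and `compute` are hypothetical stand-ins for the CPU-to-GPU transfer and the GPU kernel.

```python
# Double-buffering sketch of computation-communication overlap.
from concurrent.futures import ThreadPoolExecutor

def load_weights(layer_id):
    # Stand-in for asynchronously copying one layer's weights to the GPU.
    return [layer_id] * 4

def compute(x, weights):
    # Stand-in for running one layer's forward pass on the GPU.
    return x + sum(weights)

def pipelined_forward(x, num_layers):
    # One background worker handles transfers; the main thread computes.
    with ThreadPoolExecutor(max_workers=1) as io:
        future = io.submit(load_weights, 0)  # prefetch the first layer
        for i in range(num_layers):
            weights = future.result()        # wait for the pending transfer
            if i + 1 < num_layers:
                # Kick off the next transfer before computing, so the copy
                # proceeds while compute(x, weights) runs.
                future = io.submit(load_weights, i + 1)
            x = compute(x, weights)
    return x

print(pipelined_forward(0, 4))
```

In a real system the transfer worker would issue asynchronous copies (e.g., on a separate CUDA stream with pinned host memory), but the scheduling pattern is the same: the wait on `future.result()` replaces a stream synchronization point.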
Yangyijian Liu
School of Computer Science, Nanjing University, China
Jun Li
School of Computer Science, Nanjing University, China
Wu-Jun Li
Nanjing University
Artificial Intelligence · Machine Learning · Big Data