FreeRide: Harvesting Bubbles in Pipeline Parallelism

📅 2024-09-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address pipeline bubbles, the idle periods that can account for over 40% of training time and cause severe GPU underutilization in large language model (LLM) pipeline-parallel training, this paper proposes FreeRide, a lightweight, non-intrusive bubble-aware scheduling framework. Built as a PyTorch extension, it performs fine-grained bubble detection at runtime, dynamic CUDA stream preemption, and resource quota enforcement, and exposes a unified side-task interface, all while preserving the integrity of the primary training job. It introduces a fine-grained GPU resource isolation mechanism that lets diverse side tasks, including graph analytics and image processing, run concurrently with model training. Experiments show an average 7.8% reduction in total training cost with only about 1% additional runtime overhead.

📝 Abstract
The occurrence of bubbles in pipeline parallelism is an inherent limitation that can account for more than 40% of the large language model (LLM) training time and is one of the main reasons for the underutilization of GPU resources in LLM training. Harvesting these bubbles for GPU side tasks can increase resource utilization and reduce training costs but comes with challenges. First, because bubbles are discontinuous with various shapes, programming side tasks becomes difficult while requiring excessive engineering effort. Second, a side task can compete with pipeline training for GPU resources and incur significant overhead. To address these challenges, we propose FreeRide, a system designed to harvest bubbles in pipeline parallelism for side tasks. FreeRide provides programmers with interfaces to implement side tasks easily, manages bubbles and side tasks during pipeline training, and controls access to GPU resources by side tasks to reduce overhead. We demonstrate that FreeRide achieves 7.8% average cost savings with a negligible overhead of about 1% in training LLMs while serving model training, graph analytics, and image processing side tasks.
Problem

Research questions and friction points this paper is trying to address.

Addressing GPU underutilization in LLM training due to pipeline bubbles
Reducing engineering effort for programming discontinuous bubble side tasks
Minimizing overhead from side tasks competing with pipeline training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Harvests pipeline parallelism bubbles for side tasks
Provides easy interfaces for side task implementation
Manages GPU resource access to minimize overhead
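
The "easy interfaces" point above suggests a step-based programming model: a side task is split into short, resumable steps so the scheduler can run it only while a bubble is open and hand the GPU back to training when the bubble closes. A minimal CPU-only sketch of that idea, where all names (`SideTask`, `step`, `harvest_bubble`) are hypothetical illustrations, not FreeRide's actual API:

```python
class SideTask:
    """Hypothetical base class: a side task decomposed into short,
    resumable steps that fit inside pipeline bubbles."""

    def init(self):
        """Prepare task state before the first bubble."""
        raise NotImplementedError

    def step(self):
        """Run one bounded unit of work; return False when finished."""
        raise NotImplementedError


class CountingTask(SideTask):
    """Toy side task that 'processes' a fixed number of items, one per step."""

    def __init__(self, total=10):
        self.total = total
        self.done = 0

    def init(self):
        self.done = 0

    def step(self):
        if self.done < self.total:
            self.done += 1
        return self.done < self.total  # False once all items are processed


def harvest_bubble(task, budget_steps):
    """Run the side task for at most budget_steps steps (one bubble),
    then yield resources back to pipeline training. Returns steps run."""
    ran = 0
    while ran < budget_steps:
        ran += 1
        if not task.step():
            break  # side task finished before the bubble closed
    return ran


# One bubble with room for 4 steps, then a second, longer bubble.
task = CountingTask(total=10)
task.init()
harvest_bubble(task, budget_steps=4)   # partial progress: 4 of 10 items
harvest_bubble(task, budget_steps=10)  # finishes the remaining 6 items
```

The key design choice this illustrates is that the scheduler never preempts a step mid-flight; it only decides whether to start the next one, which keeps the interference with the primary training job bounded by the length of a single step.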