PARALLELGPUOS: A Concurrent OS-level GPU Checkpoint and Restore System using Validated Speculation

📅 2024-05-20
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
Existing CPU-oriented concurrent checkpoint/restore (C/R) techniques cannot be directly ported to GPUs due to the absence of hardware/OS support for dirty-page tracking and copy-on-write. Method: This paper presents the first OS-level, transparent, application-agnostic concurrent GPU C/R system. Its core innovations include: (1) runtime-validated speculative extraction of kernel buffer semantics, achieving 100% accurate access-pattern identification; (2) the first OS-level seamless overlap of C/R with active GPU computation; and (3) a tightly integrated design combining kernel extensions, GPU driver collaboration, semantic-driven memory consistency control, and asynchronous context snapshotting. Results: Evaluated on CV, LLM, and reinforcement learning workloads, the system improves performance by one to two orders of magnitude over prior OS-level GPU C/R approaches across training fault tolerance, live migration, and serverless cold starts.

📝 Abstract
Checkpointing (C) and restoring (R) are key components for GPU tasks. POS is an OS-level GPU C/R system: it can transparently checkpoint or restore processes that use the GPU, without requiring any cooperation from the application, a key feature required by modern systems like the cloud. Moreover, POS is the first OS-level C/R system that can execute C/R concurrently with application execution: a critical feature that is trivially achieved when processes run only on the CPU, but becomes challenging when they use the GPU. The problem is how to ensure consistency during concurrent execution given the lack of application semantics imposed by transparency. CPU processes can leverage OS and hardware paging to fix inconsistencies without application semantics. Unfortunately, GPUs bypass the OS and paging for high performance. POS fills the semantic gap by speculatively extracting the buffer access information of GPU kernels at runtime. Thanks to the simple and well-structured nature of GPU kernels, our speculative extraction (with runtime validation) achieves 100% accuracy on applications from training to inference, spanning vision, large language models, and reinforcement learning. Based on the extracted semantics, we systematically overlap C/R with application execution, achieving orders of magnitude higher performance on various tasks compared with the state-of-the-art OS-level GPU C/R, including training fault tolerance, live GPU process migration, and cold-start acceleration in GPU-based serverless computing.
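The abstract's core idea, speculatively predicting which buffers a GPU kernel writes and validating that prediction at runtime, can be illustrated with a small sketch. This is not POS's actual implementation (the paper works at the driver/API level on real GPU memory); the class name, checksum-based validation, and per-kernel cache here are illustrative assumptions:

```python
import hashlib


class SpeculativeAccessTracker:
    """Hypothetical sketch of validated speculation: predict each
    kernel's written-buffer set, then validate by checksumming the
    buffers predicted read-only after the launch."""

    def __init__(self):
        # Per-kernel cache: kernel name -> set of written arg indices.
        self.predicted_writes = {}

    def _digest(self, buf):
        return hashlib.sha256(bytes(buf)).hexdigest()

    def launch(self, name, kernel, buffers):
        writes = self.predicted_writes.setdefault(name, set())
        # Checksum only buffers we speculate are read-only.
        before = {i: self._digest(b) for i, b in enumerate(buffers)
                  if i not in writes}
        kernel(buffers)  # run the (simulated) kernel
        # Validation: a predicted-read-only buffer that changed is a
        # misspeculation; record it so future launches are accurate.
        missed = {i for i, d in before.items()
                  if self._digest(buffers[i]) != d}
        writes |= missed
        return missed  # empty set => speculation validated
```

On a misspeculation the tracker learns the true access pattern, so subsequent launches of the same kernel validate cleanly, which is consistent with the paper's observation that GPU kernels have stable, well-structured access patterns.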
Problem

Research questions and friction points this paper is trying to address.

Enables concurrent GPU process checkpoint and restore
Addresses GPU memory access tracing challenges
Improves fault tolerance and process migration performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses validated speculation for GPU memory access
Implements software-based concurrent C/R techniques
Enhances efficiency with GPU-aware checkpoint methods
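The concurrent-C/R idea above can be sketched in miniature: once the per-kernel write sets are known, most buffers can be snapshotted in the background while kernels keep running, and only a buffer the next kernel is about to mutate needs an eager copy to keep the snapshot consistent. The class and method names below are illustrative assumptions, not the paper's API:

```python
class ConcurrentCheckpointer:
    """Hypothetical sketch of overlapping checkpoint with execution:
    buffers are snapshotted lazily in the background, but any buffer
    the next kernel will write (known from the extracted semantics)
    is copied eagerly first, preserving a consistent snapshot."""

    def __init__(self, buffers):
        self.buffers = buffers
        self.pending = set(range(len(buffers)))  # not yet snapshotted
        self.snapshot = {}

    def _copy(self, i):
        if i in self.pending:
            self.snapshot[i] = bytes(self.buffers[i])
            self.pending.discard(i)

    def before_kernel(self, write_set):
        # Eagerly snapshot buffers the next kernel will mutate.
        for i in write_set:
            self._copy(i)

    def background_step(self):
        # Copy one remaining buffer, overlapping with GPU compute.
        if self.pending:
            self._copy(min(self.pending))
```

The checkpoint completes when `pending` is empty; every snapshotted buffer holds its pre-mutation contents even though kernels ran throughout, which is the consistency property the semantic-driven overlap is designed to provide.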
Zhuobin Huang
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University
Xingda Wei
Shanghai Jiao Tong University
System for AI · Distributed system · Operating system
Yingyi Hao
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University
Rong Chen
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University
Mingcong Han
Shanghai Jiao Tong University
computer system
Jinyu Gu
Shanghai Jiao Tong University
Operating System · System Security · Virtualization
Haibo Chen
Institute of Parallel and Distributed Systems, SEIEE, Shanghai Jiao Tong University