BOA Constrictor: Squeezing Performance out of GPUs in the Cloud via Budget-Optimal Allocation

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical challenge of optimizing GPU resource allocation in cloud environments under a limited budget to balance the cost and performance of machine learning training. It formulates cloud-based GPU scheduling as a budget-constrained optimization problem and proposes BOA, an optimal allocation strategy that integrates queueing theory and optimization principles to model task scheduling. Leveraging empirical insights into GPU parallel efficiency, BOA employs an online algorithm to dynamically determine the number of GPUs to rent and allocate them among active training jobs. Experimental results demonstrate that, compared to existing heuristic schedulers, BOA reduces average job completion time by 1.6× in small-scale experiments and by 2× in large-scale simulations, substantially improving training efficiency under budget constraints.

📝 Abstract
The past decade has seen a dramatic increase in demand for GPUs to train Machine Learning (ML) models. Because it is prohibitively expensive for most organizations to build and maintain a large GPU cluster, organizations instead choose to rent GPUs from cloud providers. The customer is responsible for devising a policy for (i) deciding how many GPUs to rent at every moment in time to process a stream of ML training jobs and (ii) allocating the rented GPUs among the currently active jobs in the system. Because ML training jobs can be parallelized across different numbers of GPUs, the customer generally has many options for how many GPUs to use for each job. Allocating more GPUs to a single training job will cause the job to complete more quickly. However, the customer pays for each GPU-hour they use, and a training job receives a diminishing marginal benefit from running on additional GPUs. Hence, allocating too many GPUs to a single training job can dramatically increase the overall cost that the customer pays to the cloud provider. This gives rise to a cost-performance tradeoff that customers must balance when running training jobs in the cloud. To balance this tradeoff, we develop BOA Constrictor, a new scheduler for ML training jobs that uses a Budget-Optimal Allocation (BOA) policy to squeeze the highest level of performance out of a cloud-deployed GPU cluster given a fixed budget constraint. We explicitly formulate the problem as a budget-constrained scheduling problem and derive the BOA policy, which minimizes the average job completion time (JCT) of a stream of arriving jobs subject to the user's budget. For a given budget level, we demonstrate that BOA Constrictor can reduce average JCT by 1.6 times in small-scale implementation experiments and by 2 times in detailed, large-scale simulations compared to state-of-the-art heuristic-based schedulers.
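The cost-performance tradeoff in the abstract can be made concrete with a small sketch. The sublinear speedup curve below is a hypothetical stand-in for the paper's empirically measured GPU parallel efficiency; `alpha`, `W`, and `price_per_gpu_hour` are illustrative parameters, not values from the paper.

```python
def speedup(g, alpha=0.1):
    """Hypothetical diminishing-returns speedup for g GPUs.

    speedup(1) == 1; each additional GPU helps, but by less
    than the one before it (sublinear scaling).
    """
    return g / (1 + alpha * (g - 1))

def completion_time(work, g, alpha=0.1):
    """Time to finish `work` (in single-GPU hours) on g GPUs."""
    return work / speedup(g, alpha)

def cost(work, g, price_per_gpu_hour=2.0, alpha=0.1):
    """Total bill: the customer pays for every GPU-hour,
    i.e., g GPUs for the job's entire duration."""
    return price_per_gpu_hour * g * completion_time(work, g, alpha)

if __name__ == "__main__":
    W = 100.0  # hypothetical job size: 100 single-GPU hours
    for g in (1, 2, 4, 8):
        print(f"{g} GPUs: "
              f"time={completion_time(W, g):6.1f} h, "
              f"cost=${cost(W, g):6.1f}")
```

Running the loop shows completion time falling while total cost rises as more GPUs are allocated, which is exactly the tension a budget-optimal policy must resolve across a whole stream of jobs.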
Problem

Research questions and friction points this paper is trying to address.

GPU allocation
cost-performance tradeoff
cloud computing
ML training jobs
budget-constrained scheduling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Budget-Optimal Allocation
GPU scheduling
cost-performance tradeoff
job completion time
cloud ML training