GOGH: Correlation-Guided Orchestration of GPUs in Heterogeneous Clusters

📅 2025-10-17
🤖 AI Summary
To address low resource utilization, high energy consumption, and poor cross-hardware generalization in heterogeneous GPU clusters, this paper proposes a learning-based online resource scheduling architecture. The method employs a dual-neural-network framework that jointly models initial resource assessment, co-location interference prediction, and dynamic allocation optimization; it further introduces a correlation-guided mechanism that leverages runtime feedback to continuously refine performance prediction accuracy. The approach natively supports mixed-generation GPU environments and adapts online to hardware heterogeneity and workload dynamics. Experimental results demonstrate that, while meeting SLA constraints, the proposed scheduler reduces energy consumption by 19.3% and improves resource allocation efficiency by 27.6% over conventional strategies. Moreover, its prediction accuracy generalizes and improves over time, significantly enhancing both energy efficiency and adaptability in large-scale deep learning clusters.
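The estimate-allocate-refine loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the GPU types, throughput/power numbers, the table-lookup "predictor" standing in for the first neural network, and the blending update standing in for the second network are all invented for clarity.

```python
# Hypothetical sketch of an online scheduling loop in the spirit of GOGH:
# an initial predictor scores each GPU type for a new job, an optimizer
# picks the lowest-energy placement that meets the SLA, and observed
# runtime feedback refines the estimates for future decisions.

GPU_TYPES = {
    # type: (predicted throughput in samples/s, predicted power in watts)
    "V100": (900.0, 200.0),
    "A100": (1500.0, 400.0),
    "T4":   (400.0, 70.0),
}

def initial_estimate(gpu_type):
    """Stand-in for the first neural network's prediction."""
    return GPU_TYPES[gpu_type]

def allocate(sla_throughput):
    """Pick the GPU type that meets the SLA at the lowest energy per sample."""
    feasible = []
    for gpu in GPU_TYPES:
        tput, power = initial_estimate(gpu)
        if tput >= sla_throughput:
            feasible.append((power / tput, gpu))  # joules per sample
    return min(feasible)[1] if feasible else None

def refine(gpu_type, observed_throughput, lr=0.5):
    """Stand-in for the second network: blend the estimate toward observation."""
    tput, power = GPU_TYPES[gpu_type]
    GPU_TYPES[gpu_type] = (tput + lr * (observed_throughput - tput), power)

choice = allocate(sla_throughput=800.0)        # V100 is the cheapest feasible option
refine(choice, observed_throughput=500.0)      # runtime feedback lowers its estimate
choice_after = allocate(sla_throughput=800.0)  # V100 no longer meets the SLA; A100 does
```

The real system replaces both stand-ins with learned models and additionally predicts co-location interference, but the control flow (predict, optimize, deploy, monitor, refine) follows this shape.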

📝 Abstract
The growing demand for computational resources in machine learning has made efficient resource allocation a critical challenge, especially in heterogeneous hardware clusters where devices vary in capability, age, and energy efficiency. Upgrading to the latest hardware is often infeasible, making sustainable use of existing, mixed-generation resources essential. In this paper, we propose a learning-based architecture for managing machine learning workloads in heterogeneous clusters. The system operates online, allocating resources to incoming training or inference requests while minimizing energy consumption and meeting performance requirements. It uses two neural networks: the first provides initial estimates of how well a new model will utilize different hardware types and how it will affect co-located models. An optimizer then allocates resources based on these estimates. After deployment, the system monitors real performance and uses this data to refine its predictions via a second neural network. This updated model improves estimates not only for the current hardware but also for hardware not initially allocated and for co-location scenarios not yet observed. The result is an adaptive, iterative approach that learns over time to make more effective resource allocation decisions in heterogeneous deep learning clusters.
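The abstract's key refinement property, that feedback from deployed hardware also improves estimates for hardware the job was never placed on, can be sketched as an error-propagation step. The correlation matrix, GPU names, and throughput numbers below are hypothetical stand-ins for what the second neural network would learn.

```python
# Minimal sketch of correlation-guided refinement: the relative prediction
# error observed on one GPU type is propagated to every GPU type's estimate,
# scaled by how correlated their behavior is assumed to be. All values are
# illustrative, not from the paper.

PRED = {"A100": 1500.0, "V100": 900.0, "T4": 400.0}  # predicted throughput

# Hypothetical pairwise correlation of prediction errors across GPU types.
CORR = {
    ("A100", "V100"): 0.8,
    ("A100", "T4"): 0.4,
    ("V100", "T4"): 0.5,
}

def corr(a, b):
    """Symmetric lookup; a GPU type is perfectly correlated with itself."""
    if a == b:
        return 1.0
    return CORR.get((a, b), CORR.get((b, a), 0.0))

def refine_all(observed_gpu, observed_throughput):
    """Propagate the relative error seen on one GPU to all estimates."""
    rel_err = observed_throughput / PRED[observed_gpu] - 1.0
    for gpu in PRED:
        PRED[gpu] *= 1.0 + corr(observed_gpu, gpu) * rel_err

# A job runs 20% slower than predicted on the A100 it was assigned to;
# the V100 and T4 estimates are corrected too, by 16% and 8% respectively.
refine_all("A100", 1200.0)
```

A learned refinement network would infer these cross-hardware relationships from data rather than from a fixed table, which is what lets it also cover co-location scenarios not yet observed.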
Problem

Research questions and friction points this paper is trying to address.

Optimizing resource allocation in heterogeneous GPU clusters
Managing ML workloads while minimizing energy consumption
Improving performance predictions for co-located models on mixed hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses two neural networks for resource allocation
Learns from real performance to refine predictions
Adapts iteratively for heterogeneous hardware optimization