🤖 AI Summary
In community-based heterogeneous GPU networks, high hardware and software diversity, low resource availability, and dynamically varying network conditions severely degrade the efficiency of conventional schedulers. To address this, we propose REACH, a novel scheduling framework that, for the first time, formulates task scheduling as a sequence scoring problem, integrating Transformer architectures with deep reinforcement learning to jointly optimize performance, reliability, cost, and network efficiency. REACH jointly models global GPU state and task requirements to enable adaptive, co-located placement of computation and data. Simulation results show that REACH improves the overall task completion rate by 17%, more than doubles the success rate of high-priority tasks, and reduces bandwidth overhead by more than 80%. It also remains robust under stress and scales well in large-scale deployments.
📝 Abstract
Community GPU platforms are emerging as a cost-effective and democratized alternative to centralized GPU clusters for AI workloads, aggregating idle consumer GPUs from globally distributed and heterogeneous environments. However, their extreme hardware/software diversity, volatile availability, and variable network conditions render traditional schedulers ineffective, leading to suboptimal task completion. In this work, we present REACH (Reinforcement Learning for Efficient Allocation in Community and Heterogeneous Networks), a Transformer-based reinforcement learning framework that redefines task scheduling as a sequence scoring problem to balance performance, reliability, cost, and network efficiency. By modeling both global GPU states and task requirements, REACH learns to adaptively co-locate computation with data, prioritize critical jobs, and mitigate the impact of unreliable resources. Extensive simulation results show that REACH improves task completion rates by up to 17%, more than doubles the success rate for high-priority tasks, and reduces bandwidth penalties by over 80% compared to state-of-the-art baselines. Stress tests further demonstrate its robustness to GPU churn and network congestion, while scalability experiments confirm its effectiveness in large-scale, high-contention scenarios.
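To make the sequence-scoring formulation concrete, here is a minimal NumPy sketch of the core idea: the task descriptor and the per-GPU state vectors are combined into a token sequence, a self-attention layer lets each candidate GPU attend to the global state of the others, and a linear head emits one placement score per GPU. All function names, feature layouts, and the randomly initialized weights are illustrative assumptions, not the paper's actual architecture or trained parameters.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def score_gpus(task_feat, gpu_feats, seed=0):
    """Score each candidate GPU for one task via a single self-attention layer.

    task_feat: (d,) task requirements (e.g. memory, priority, data locality).
    gpu_feats: (n, d) per-GPU states (e.g. load, reliability, bandwidth).
    Returns a (n,) vector of scores; higher means a better placement.
    NOTE: weights are random stand-ins for learned parameters (sketch only).
    """
    rng = np.random.default_rng(seed)
    d = task_feat.shape[0]
    # Condition every GPU token on the task by simple feature addition.
    tokens = gpu_feats + task_feat                      # (n, d)
    # Random projections stand in for the learned Q/K/V matrices.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))                # (n, n) cross-GPU attention
    ctx = attn @ v                                      # (n, d) context-aware states
    w_out = rng.standard_normal(d) / np.sqrt(d)         # linear scoring head
    return ctx @ w_out                                  # (n,) score per GPU


# Usage: pick the highest-scoring GPU for a task.
task = np.array([0.5, 1.0, 0.2, 0.8])
gpus = np.arange(12.0).reshape(3, 4) / 10.0
best = int(np.argmax(score_gpus(task, gpus)))
```

In a trained system the projections would be learned end to end with reinforcement learning, so the scores reflect the joint performance, reliability, cost, and bandwidth objectives rather than random weights; the point here is only the shape of the computation, scoring a whole candidate sequence at once instead of ranking GPUs independently.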