Characterizing Compute-Communication Overlap in GPU-Accelerated Distributed Deep Learning: Performance and Power Implications

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the dual impact of computation-communication overlap on performance and energy efficiency in GPU-accelerated distributed deep learning. Addressing the challenges of increased computational latency, resource contention, and elevated power consumption under aggressive overlap, we build a multi-GPU distributed training testbed to evaluate overlap behavior across varying numeric precisions, specialized compute units (e.g., Tensor Cores), and power-capping regimes. Results show that while overlap improves throughput by 10.2% on average, it slows computation by 18.9% on average and by up to 40%; significant energy-efficiency trade-offs also emerge under power and frequency constraints. We propose a novel “balanced overlap” strategy and demonstrate that hardware-specific characteristics, rather than overlap per se, are the primary determinants of its efficacy. Our findings provide empirical evidence and design principles for energy-aware distributed training systems.

📝 Abstract
This paper provides an in-depth characterization of GPU-accelerated systems to understand the interplay between computation and communication overlap, which is commonly employed in distributed training. Because modern models are too large for a single device, they must be distributed across multiple devices. Overlapping strategies, which enable concurrent computation and communication, are critical for mitigating communication bottlenecks and maximizing GPU utilization, and the current consensus is that compute and communication should always be aggressively overlapped to hide the overhead of distribution. By systematically evaluating state-of-the-art GPUs, this study investigates the impact of hardware features such as numeric precision, specialized cores, and power capping on distributed training workloads. Comprehensive experiments showcase the effects of overlapping strategies on performance and power consumption across varying scenarios. We observe that overlapping computation and communication can slow down computation by 18.9% on average, and by up to 40.0%, relative to an ideal baseline in which no communication runs concurrently with compute and therefore has no impact on compute time. However, performing computation and communication sequentially is, on average, 10.2% slower than overlapped execution, with a maximum slowdown of 26.6%. We further observe that, while specialized datapaths and optimized numeric precision mitigate certain slowdowns, overlapped execution can lead to resource contention and increased power consumption under specific configurations. The analysis also uncovers trade-offs introduced by power and frequency capping, emphasizing the importance of balanced strategies that optimize both energy efficiency and training throughput.
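The trade-off in the abstract can be sketched with a toy timing model. The per-iteration compute and communication times below are hypothetical; the only number taken from the paper is the 18.9% average compute slowdown observed under overlap:

```python
# Toy timing model for compute/communication overlap (illustrative only).
# The 10 ms compute and 4 ms communication times are made up; the 18.9%
# contention slowdown is the paper's reported average.

def sequential_time(compute_ms, comm_ms):
    """Compute and communication run back-to-back."""
    return compute_ms + comm_ms

def overlapped_time(compute_ms, comm_ms, contention_slowdown=0.189):
    """Communication runs concurrently with compute, but contention
    (e.g., for memory bandwidth) inflates the compute time."""
    slowed_compute = compute_ms * (1 + contention_slowdown)
    return max(slowed_compute, comm_ms)

compute_ms, comm_ms = 10.0, 4.0  # hypothetical iteration breakdown
seq = sequential_time(compute_ms, comm_ms)
ovl = overlapped_time(compute_ms, comm_ms)
print(f"sequential: {seq:.2f} ms, overlapped: {ovl:.2f} ms")
```

Even with contention inflating compute to 11.89 ms in this sketch, overlapped execution still beats the 14 ms sequential schedule, matching the paper's finding that overlap helps on average despite slowing the compute itself.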
Problem

Research questions and friction points this paper is trying to address.

Analyzes compute-communication overlap impact on GPU-accelerated distributed deep learning.
Investigates hardware features' effect on performance and power in distributed training.
Explores trade-offs between overlapping strategies, resource contention, and energy efficiency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Overlapping computation and communication in GPUs
Evaluating hardware impact on distributed training
Balancing power capping and performance trade-offs