🤖 AI Summary
Distributed deep learning training often suffers from elevated latency and slowed convergence due to network congestion. While existing gradient compression techniques reduce communication overhead, they frequently incur non-negligible accuracy degradation. This paper proposes a network-state-driven conditional gradient compression mechanism: it continuously monitors real-time bandwidth, latency, and queue length, and triggers lightweight compression operations (such as quantization and structured pruning) only when congestion significantly impedes convergence, thereby avoiding the redundancy and distortion inherent in globally static or periodic compression schemes. To the best of our knowledge, this is the first approach to jointly optimize network-state awareness, convergence performance, and communication efficiency. Experiments under bandwidth-constrained settings demonstrate a 1.55×–9.84× improvement in training throughput, substantially reduced convergence time, and near-zero model accuracy loss.
📝 Abstract
Training large-scale distributed machine learning models imposes considerable demands on network infrastructure. Sudden traffic spikes can cause congestion, increased latency, and reduced throughput, ultimately degrading convergence times and overall training performance. While gradient compression techniques are commonly employed to alleviate network load, they frequently compromise model accuracy due to the loss of gradient information. This paper introduces NetSenseML, a novel network-adaptive distributed deep learning framework that dynamically adjusts quantization, pruning, and compression strategies in response to real-time network conditions. By actively monitoring network conditions, NetSenseML applies gradient compression only when network congestion negatively impacts convergence speed, effectively balancing data payload reduction against model accuracy preservation. Our approach ensures efficient resource usage by adapting reduction techniques to current network conditions, leading to shorter convergence times and improved training efficiency. We present the design of the NetSenseML adaptive data reduction function, and experimental evaluations show that NetSenseML improves training throughput by a factor of 1.55× to 9.84× compared to state-of-the-art compression-enabled systems for representative DDL training jobs under bandwidth-constrained conditions.
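The core idea of conditional, network-state-driven compression can be sketched as follows. This is a minimal illustrative sketch, not NetSenseML's actual implementation: the congestion thresholds, helper names (`congested`, `quantize_int8`, `maybe_compress`), and the choice of int8 quantization as the compression operator are all assumptions for the example.

```python
import numpy as np

def congested(bandwidth_mbps, latency_ms, queue_len,
              bw_floor=100.0, lat_ceil=50.0, queue_ceil=1000):
    # Hypothetical congestion test: thresholds are illustrative, not from the paper.
    return (bandwidth_mbps < bw_floor
            or latency_ms > lat_ceil
            or queue_len > queue_ceil)

def quantize_int8(grad):
    # Uniform 8-bit quantization: ship int8 values plus one float scale factor.
    scale = np.abs(grad).max() / 127.0 or 1.0  # guard against all-zero gradients
    q = np.clip(np.round(grad / scale), -127, 127).astype(np.int8)
    return q, float(scale)

def maybe_compress(grad, bandwidth_mbps, latency_ms, queue_len):
    # Compress the gradient only when monitored network state indicates congestion;
    # otherwise send it uncompressed to avoid needless information loss.
    if congested(bandwidth_mbps, latency_ms, queue_len):
        q, scale = quantize_int8(grad)
        return ("int8", q, scale)
    return ("raw", grad, None)
```

Under an uncongested link the gradient passes through untouched, preserving accuracy; only when bandwidth drops, latency rises, or queues build does the (lossy) quantized path engage, which is what distinguishes this from globally static or periodic compression.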