🤖 AI Summary
To address gradient conflicts and degraded generalization caused by naive gradient averaging in distributed data-parallel training, this paper proposes Gradient Agreement Filtering (GAF). GAF dynamically identifies and discards micro-batch gradients that exhibit directional conflict, measured via cosine distance, and aggregates only those gradients with consistent directions for model updates. It is the first method to explicitly adopt gradient directional agreement as a filtering criterion, mitigating the overfitting and memorization of noisy labels that arise when micro-gradients become orthogonal or negatively correlated in late-stage training. Evaluated on CIFAR-100 and CIFAR-100N-Fine, GAF achieves up to an 18.2% improvement in validation accuracy. Moreover, it enables stable training with significantly smaller microbatch sizes, reducing the computation required by nearly an order of magnitude compared to standard distributed averaging baselines.
📝 Abstract
We introduce Gradient Agreement Filtering (GAF) to improve on gradient averaging in distributed deep learning optimization. Traditional distributed data-parallel stochastic gradient descent averages the gradients of microbatches to compute a macrobatch gradient that is then used to update model parameters. We find that gradients across microbatches are often orthogonal or negatively correlated, especially in late stages of training, which leads to memorization of the training set and reduces generalization. In this paper, we introduce a simple, computationally efficient way to reduce gradient variance: computing the cosine distance between micro-gradients during training and filtering out conflicting updates prior to averaging. This improves validation accuracy with significantly smaller microbatch sizes and also reduces the memorization of noisy labels. We demonstrate the effectiveness of this technique on standard image classification benchmarks including CIFAR-100 and CIFAR-100N-Fine. It consistently improves validation accuracy, in some cases by up to 18.2% compared to traditional training approaches, while reducing the computation required by nearly an order of magnitude, because we can now rely on smaller microbatch sizes without destabilizing training.
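The filtering step described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's exact procedure: the fold-in order, the running-average aggregation, and the threshold value are assumptions chosen for clarity.

```python
import numpy as np

def gaf_aggregate(micro_grads, cos_dist_threshold=0.97):
    """Aggregate microbatch gradients, skipping conflicting ones.

    Sketch of Gradient Agreement Filtering: fold each micro-gradient
    into a running average, but discard any whose cosine distance to
    the current aggregate exceeds the threshold. Returns the filtered
    average and the number of micro-gradients kept.
    """
    agg = None
    n_kept = 0
    for g in micro_grads:
        g = np.asarray(g, dtype=float)
        if agg is None:
            # First micro-gradient seeds the aggregate.
            agg, n_kept = g.copy(), 1
            continue
        denom = np.linalg.norm(agg) * np.linalg.norm(g)
        # Cosine distance = 1 - cosine similarity; treat a zero
        # gradient as maximally disagreeing.
        cos_dist = 1.0 - float(agg @ g) / denom if denom > 0 else 1.0
        if cos_dist <= cos_dist_threshold:
            # Directions agree: fold into the running average.
            agg = (n_kept * agg + g) / (n_kept + 1)
            n_kept += 1
        # Otherwise the micro-gradient conflicts and is dropped.
    return agg, n_kept

# Two roughly aligned gradients are averaged; the opposing one is dropped.
grads = [np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([-1.0, 0.0])]
agg, kept = gaf_aggregate(grads)
```

In a real data-parallel setup each worker would contribute one micro-gradient (a flattened parameter-gradient vector), and the filtered average would replace the plain all-reduce mean before the optimizer step.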