🤖 AI Summary
This work addresses the challenges of distributed large-batch training, where naively increasing batch size often leads to excessive communication overhead, resource constraints, and degraded generalization, making it difficult to balance training efficiency with model quality. The paper proposes a unified optimization framework that, for the first time, enables online joint tuning of training time, cost, and generalization performance. By integrating parallel system modeling with statistical performance prediction, the method dynamically determines the optimal batch size, significantly improving efficiency while preserving convergence guarantees. Empirical evaluations across multiple vision tasks demonstrate prediction errors within 7.5%–14%, achieving up to 20× speedup over standard large-batch training and an average 9% improvement in test accuracy.
📝 Abstract
Distributed training increases the number of samples processed per iteration either by scaling out (adding more nodes) or scaling up (increasing the per-node batch-size). However, the largest configuration does not necessarily yield the best performance. Horizontal scaling introduces additional communication overhead, while vertical scaling is constrained by computation cost and device memory limits. Thus, simply increasing the batch-size leads to diminishing returns: training time and cost decrease initially but eventually plateau, creating a knee-point in the time/cost versus batch-size Pareto curve. The optimal batch-size therefore depends on the underlying model, data, and available compute resources. Large batches also degrade model quality due to the well-known generalization gap. In this paper, we present Tula, an online service that automatically optimizes time, cost, and convergence quality for large-batch training of convolutional models. It combines parallel-systems modeling with statistical performance prediction to identify the optimal batch-size. Tula predicts training time and cost within 7.5%–14% error across multiple models, and achieves up to 20× overall speedup while improving test accuracy by 9% on average over standard large-batch training on various vision tasks, thus mitigating the generalization gap and accelerating training at the same time.