🤖 AI Summary
To address the dual bottlenecks of GPU memory consumption and compute time in deep neural network training, this paper proposes the first adaptive framework that jointly optimizes mixed-precision arithmetic, sparse second-order information (Hessian/Fisher), and elastic batch sizing. Guided by curvature-aware analysis that leverages Hessian/Fisher sparsity and gradient variance, the method uses custom Triton kernels for end-to-end, hardware-aware, dynamic scheduling of layer-wise precision, learning rates, and batch sizes, eliminating manual hyperparameter tuning. Crucially, it unifies the coupled interactions among these three acceleration strategies within a single optimization model. Evaluated on ResNet-18 and EfficientNet-B0, the approach achieves up to a 9.9% training speedup, a 13.3% memory reduction (e.g., from 0.35 GB to 0.31 GB), and a 1.1-percentage-point accuracy gain, demonstrating both the effectiveness and generalizability of multi-dimensional resource co-optimization.
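To make the curvature-aware precision scheduling concrete, here is a minimal PyTorch sketch of how layer-wise precision might be assigned from a gradient-variance proxy. The function name, sliding-window length, and threshold are our own illustrative assumptions, not the paper's actual rule:

```python
import torch

def assign_layer_precision(model, grad_history, var_threshold=1e-4, window=10):
    """Hypothetical sketch: route layers with stable recent gradients to
    FP16 and keep noisy (high-variance) layers in FP32."""
    precision = {}
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        # Keep a short sliding window of per-layer gradient norms.
        history = grad_history.setdefault(name, [])
        history.append(param.grad.float().norm().item())
        if len(history) > window:
            history.pop(0)
        # Low variance over the window suggests the layer tolerates
        # reduced precision; high variance argues for full precision.
        if len(history) > 1:
            var = torch.tensor(history).var().item()
        else:
            var = float("inf")  # not enough evidence yet: stay in FP32
        precision[name] = torch.float16 if var < var_threshold else torch.float32
    return precision
```

In Tri-Accel, decisions like these would presumably feed the custom Triton kernels; here the mapping is returned as a plain dictionary purely for clarity.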
📝 Abstract
Deep neural networks are increasingly bottlenecked by the cost of optimization, in terms of both GPU memory and compute time. Existing acceleration techniques, such as mixed precision, second-order methods, and batch size scaling, are typically used in isolation. We present Tri-Accel, a unified optimization framework that co-adapts three acceleration strategies and their associated parameters during training: (1) Precision-Adaptive Updates that dynamically assign mixed-precision levels to layers based on curvature and gradient variance; (2) Sparse Second-Order Signals that exploit Hessian/Fisher sparsity patterns to guide precision and step-size decisions; and (3) Memory-Elastic Batch Scaling that adjusts batch size in real time according to VRAM availability. On CIFAR-10 with ResNet-18 and EfficientNet-B0, Tri-Accel achieves up to a 9.9% reduction in training time and 13.3% lower memory usage, while improving accuracy by 1.1 percentage points over FP32 baselines. Tested on CIFAR-10/100, our approach demonstrates adaptive learning behavior, with efficiency gradually improving over the course of training as the system learns to allocate resources more effectively. Compared to static mixed-precision training, Tri-Accel maintains 78.1% accuracy while reducing the memory footprint from 0.35 GB to 0.31 GB on standard hardware. The framework is implemented with custom Triton kernels whose hardware-aware adaptation enables automatic optimization without manual hyperparameter tuning, making it practical for deployment across diverse computational environments. This work demonstrates how algorithmic adaptivity and hardware awareness can be combined to improve scalability in resource-constrained settings, paving the way for more efficient neural network training on edge devices and cost-sensitive cloud deployments.
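For a concrete feel of the Memory-Elastic Batch Scaling component, the following is a hypothetical PyTorch sketch of VRAM-driven batch-size adjustment. The watermark fractions and the doubling/halving policy are our own illustrative choices, not Tri-Accel's published schedule:

```python
import torch

def elastic_batch_size(current_bs, min_bs=32, max_bs=512,
                       low_watermark=0.25, high_watermark=0.50):
    """Hypothetical sketch: grow the batch when VRAM is plentiful and
    shrink it under memory pressure. Requires a CUDA device."""
    # mem_get_info returns (free, total) device memory in bytes.
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    free_frac = free_bytes / total_bytes
    if free_frac > high_watermark and current_bs < max_bs:
        return min(current_bs * 2, max_bs)   # ample headroom: scale up
    if free_frac < low_watermark and current_bs > min_bs:
        return max(current_bs // 2, min_bs)  # memory pressure: back off
    return current_bs
```

A training loop would call such a function between epochs (or at a fixed step interval) and rebuild its DataLoader when the returned batch size changes; the real-time behavior described in the abstract implies the check happens continuously during training.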