🤖 AI Summary
To address excessive activation storage overhead in DNN training on memory-constrained mobile and edge devices, this paper proposes a dynamic activation compression framework co-designed for SoC memory hierarchies. The method integrates three key techniques: (1) hybrid reduction operations with CPU–GPU collaborative bit-packing to minimize data-transfer and storage redundancy; (2) importance-aware paged memory management to mitigate fragmentation and improve memory-access efficiency; and (3) runtime-adaptive quantization coupled with gradient-fidelity recovery to preserve training stability. Evaluated across multiple models and devices, the approach achieves up to 22.9× activation memory compression and a 3.2× training speedup without accuracy loss. This work is the first to tightly integrate heterogeneous collaborative compression, hierarchical memory scheduling, and dynamic quantization, significantly improving the scalability and practicality of end-to-end training under severe resource constraints.
📝 Abstract
Recent advancements in on-device training for deep neural networks have underscored the critical need for efficient activation compression to overcome the memory constraints of mobile and edge devices. As activations dominate memory usage during training and are essential for gradient computation, compressing them without compromising accuracy remains a key research challenge. While existing methods for dynamic activation quantization promise theoretical memory savings, their practical deployment is impeded by system-level challenges such as computational overhead and memory fragmentation.
To address these challenges, we introduce DAF, a Dynamic Activation Framework that enables scalable and efficient on-device training through system-level optimizations. DAF achieves both memory- and time-efficient dynamic quantization training by targeting these bottlenecks directly: it develops hybrid reduction operations tailored to the memory hierarchies of mobile and edge SoCs, leverages collaborative CPU-GPU bit-packing for efficient dynamic quantization, and implements an importance-aware paging memory management scheme that reduces fragmentation and supports dynamic memory adjustments.
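The abstract names bit-packing for dynamic quantization but does not spell out the mechanics. As a rough illustration of the general idea (not DAF's actual implementation, and with hypothetical helper names `quantize_and_pack` / `unpack_and_dequantize`), a 4-bit quantizer can store two activation codes per byte, an 8× reduction over float32 before any further compression:

```python
import numpy as np

def quantize_and_pack(activations, n_bits=4):
    """Quantize float activations to n_bits and pack two 4-bit codes per byte.

    Illustrative sketch only -- DAF's CPU-GPU collaborative bit-packing
    is not described at this level of detail in the abstract.
    """
    lo, hi = activations.min(), activations.max()
    scale = (hi - lo) / (2 ** n_bits - 1)
    if scale == 0:
        scale = 1.0  # constant tensor: avoid division by zero
    codes = np.round((activations - lo) / scale).astype(np.uint8)
    flat = codes.ravel()
    if flat.size % 2:                      # pad to an even count of codes
        flat = np.append(flat, np.uint8(0))
    # Two 4-bit codes per byte: 2x over uint8, 8x over float32.
    packed = (flat[0::2] << 4) | flat[1::2]
    return packed, lo, scale

def unpack_and_dequantize(packed, lo, scale, size, shape):
    """Inverse of quantize_and_pack: unpack nibbles and rescale."""
    high, low = packed >> 4, packed & 0x0F
    codes = np.empty(high.size * 2, dtype=np.uint8)
    codes[0::2], codes[1::2] = high, low
    return (codes[:size].astype(np.float32) * scale + lo).reshape(shape)
```

A round trip over 16 evenly spaced values reconstructs each activation to within half a quantization step; DAF's runtime-adaptive scheme would additionally vary `n_bits` per tensor and split the pack/unpack work between CPU and GPU.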
These optimizations collectively enable DAF to achieve substantial memory savings and speedups without compromising model training accuracy. Evaluations on various deep learning models across embedded and mobile platforms demonstrate up to a $22.9\times$ reduction in memory usage and a $3.2\times$ speedup, making DAF a scalable and practical solution for resource-constrained environments.