🤖 AI Summary
For ultra-large-scale classification with millions of labels, existing dynamic sparse training (DST) methods face three critical challenges: explosive memory consumption in the classifier layer, disrupted gradient flow due to sparsity, and poor convergence under extreme long-tail label distributions. This work is the first to bring semi-structured sparse training and dynamic sparse optimization to the extreme long-tail setting. We propose either a gradient-bridging intermediate layer or a label-distribution-aware auxiliary objective to restore effective gradient guidance from the sparse classifier to the text encoder. Our method combines semi-structured sparse matrix multiplication, sparse evolutionary training (SET), and dynamic mask updates. Experiments demonstrate a 3.2× reduction in GPU memory usage, a 1.8× training speedup, and accuracy reaching 97.4% of the dense counterpart's, enabling, for the first time, end-to-end training with millions of labels on consumer-grade hardware.
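As a rough illustration of the sparse classifier idea described above, the following sketch (in PyTorch, with hypothetical names such as `FixedFanInClassifier`; it is not the paper's released code) shows a fixed fan-in sparse output layer together with a SET-style prune-and-regrow step that keeps the per-label fan-in constant:

```python
import torch
import torch.nn as nn


class FixedFanInClassifier(nn.Module):
    """Sparse output layer: each label connects to exactly `fan_in` input features."""

    def __init__(self, in_features: int, num_labels: int, fan_in: int):
        super().__init__()
        self.in_features = in_features
        self.fan_in = fan_in
        # One weight vector of length `fan_in` per label, plus the feature
        # indices that label is connected to.
        self.weight = nn.Parameter(0.01 * torch.randn(num_labels, fan_in))
        self.register_buffer(
            "indices", torch.randint(in_features, (num_labels, fan_in))
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> gather connected features: (batch, labels, fan_in)
        gathered = x[:, self.indices]
        # Per-label dot product -> logits of shape (batch, labels)
        return torch.einsum("blk,lk->bl", gathered, self.weight)

    @torch.no_grad()
    def set_update(self, prune_fraction: float = 0.3) -> None:
        # SET-style step: drop the smallest-magnitude connections of every label
        # and regrow the same number at random feature positions, so the fan-in
        # (and hence memory use) stays fixed. Duplicate indices are ignored here
        # for simplicity.
        num_prune = int(self.fan_in * prune_fraction)
        if num_prune == 0:
            return
        prune_pos = self.weight.abs().argsort(dim=1)[:, :num_prune]
        new_idx = torch.randint(
            self.in_features, prune_pos.shape, device=self.indices.device
        )
        self.indices.scatter_(1, prune_pos, new_idx)
        self.weight.data.scatter_(1, prune_pos, 0.0)
```

In such a layer, classifier memory scales with `num_labels × fan_in` rather than `num_labels × in_features`; a production version would rely on a semi-structured sparse kernel instead of the dense gather shown here.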
📝 Abstract
In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone can consume several gigabytes of memory. Switching from a dense layer to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially for the largest label spaces. We find that poor gradient flow from the sparse classifier to the dense text encoder makes it difficult to learn good input representations. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. Overall, we demonstrate the applicability and practical benefits of DST in a challenging domain -- characterized by a highly skewed label distribution that differs substantially from typical DST benchmark datasets -- which enables end-to-end training with millions of labels on commodity hardware.
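To make the gradient-flow remedy concrete, here is a minimal sketch of the "intermediate layer" and "auxiliary training objective" ideas from the abstract (again an assumption-laden illustration, not the authors' implementation; it reuses the hypothetical `FixedFanInClassifier` from the sketch above): a dense bridge layer between the text encoder and the sparse classifier gives the encoder a dense gradient path, and an optional auxiliary head, e.g. over coarse label clusters, adds a further dense training signal.

```python
import torch
import torch.nn as nn


class BridgedSparseModel(nn.Module):
    """Text encoder -> dense bridge layer -> fixed fan-in sparse classifier,
    with an optional auxiliary head over coarse label clusters."""

    def __init__(self, encoder: nn.Module, enc_dim: int, bridge_dim: int,
                 num_labels: int, num_clusters: int, fan_in: int):
        super().__init__()
        self.encoder = encoder                               # dense text encoder
        self.bridge = nn.Linear(enc_dim, bridge_dim)         # dense intermediate layer
        self.classifier = FixedFanInClassifier(bridge_dim, num_labels, fan_in)
        self.aux_head = nn.Linear(bridge_dim, num_clusters)  # auxiliary objective

    def forward(self, inputs: torch.Tensor):
        h = torch.relu(self.bridge(self.encoder(inputs)))
        # Main sparse logits plus dense auxiliary logits; both losses
        # backpropagate into the encoder, restoring a dense gradient path.
        return self.classifier(h), self.aux_head(h)
```

During training one would combine the main multi-label loss with a weighted auxiliary loss, for instance `loss = bce(main_logits, labels) + lambda_aux * ce(aux_logits, cluster_ids)`, where the cluster targets and weighting are assumptions for the sake of illustration.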