Adaptive Sparse Softmax: An Effective and Efficient Softmax Variant

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Softmax coupled with cross-entropy suffers from a fundamental limitation: the predicted probability for the ground-truth class cannot theoretically reach unity, rendering the optimization objective unattainable and inducing training redundancy and overfitting. To address this, we propose Adaptive Sparse Softmax (AS-Softmax), which aligns training and inference objectives through three key innovations: (1) a sparse probability distribution that dynamically masks low-response classes to suppress irrelevant gradient interference; (2) an adaptive gradient accumulation mechanism prioritizing hard-to-classify samples; and (3) a feasible optimization target that improves convergence efficiency. Extensive experiments across text, image, and audio classification tasks demonstrate that AS-Softmax consistently outperforms standard Softmax and its major variants—achieving tighter loss–accuracy correlation, 1.2× faster training, and superior generalization.

📝 Abstract
Softmax with the cross-entropy loss is the standard configuration for current neural classification models. The gold score for a target class is supposed to be 1, but it is never reachable under the softmax schema. This problem makes the training process continue forever and leads to overfitting. Moreover, the "target-approach-1" training goal forces the model to continuously learn all samples, wasting time on samples that have already been classified correctly with high confidence, while the test goal simply requires the target class of each sample to hold the maximum score. To resolve these weaknesses, we propose the Adaptive Sparse softmax (AS-Softmax), which designs a reasonable, test-matching transformation on top of softmax. For more purposeful learning, we discard the classes whose scores are far smaller than the target class's during training. The model can then focus on learning to distinguish the target class from its strong opponents, which is also the main challenge at test time. In addition, since the training losses of easy samples gradually drop to 0 in AS-Softmax, we develop an adaptive gradient accumulation strategy based on the masked sample ratio to speed up training. We verify the proposed AS-Softmax on a variety of text multi-class, text multi-label, text token classification, image classification and audio classification tasks with class sizes ranging from 5 to 5000+. The results show that AS-Softmax consistently outperforms softmax and its variants, and that the loss of AS-Softmax is remarkably correlated with classification performance on the validation set. Furthermore, the adaptive gradient accumulation strategy brings about a 1.2× training speedup compared with standard softmax while maintaining classification effectiveness.
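The sparse-masking idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the margin hyperparameter `delta`, the function name `as_softmax_loss`, and the exact masking criterion (drop classes whose logit trails the target logit by more than `delta`) are assumptions for the sake of the example.

```python
import numpy as np

def as_softmax_loss(logits, target, delta=5.0):
    """Hedged sketch of AS-Softmax cross-entropy.

    Classes whose logit is more than `delta` below the target logit are
    discarded, and softmax cross-entropy is computed over the surviving
    classes only. `delta` is a hypothetical margin hyperparameter.
    Returns (loss, number_of_kept_classes).
    """
    logits = np.asarray(logits, dtype=float)
    keep = logits >= logits[target] - delta   # target class is always kept
    kept = logits[keep]
    kept = kept - kept.max()                  # subtract max for numerical stability
    probs = np.exp(kept) / np.exp(kept).sum()
    # locate the target within the kept subset
    t = int(np.flatnonzero(keep).tolist().index(target))
    return -np.log(probs[t]), int(keep.sum())
```

Note the behavior on easy samples: when every competitor is masked, the surviving distribution contains only the target class, its probability is exactly 1, and the loss drops to 0, which is what enables skipping such samples during training.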
Problem

Research questions and friction points this paper is trying to address.

Softmax training goal unreachable, causing overfitting and inefficiency
Wasted effort on correctly classified high-confidence samples
Need for test-aligned sparse learning and faster training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Sparse Softmax: a test-matching transformation on top of softmax
Discards classes scoring far below the target class during training
Adaptive gradient accumulation based on the masked sample ratio speeds up training
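The adaptive gradient accumulation contribution can be sketched as a simple rule: the larger the fraction of samples in a batch whose loss has already dropped to zero (the masked ratio), the more micro-batches are merged before an optimizer update. Everything below (the function name, the inverse-ratio rule, `base_steps`, `max_steps`) is a hypothetical illustration; the paper's exact strategy may differ.

```python
def adaptive_accum_steps(masked_ratio, base_steps=1, max_steps=8):
    """Hypothetical sketch of masked-ratio-driven gradient accumulation.

    masked_ratio: fraction of samples in the batch with zero AS-Softmax
    loss (in [0, 1]). When most samples are masked, each micro-batch
    contributes few gradients, so more micro-batches are accumulated
    before stepping the optimizer, capped at max_steps.
    """
    if not 0.0 <= masked_ratio < 1.0:
        return max_steps                       # fully masked batch: cap
    steps = round(base_steps / (1.0 - masked_ratio))
    return max(base_steps, min(max_steps, steps))
```

For example, a batch where half the samples are masked would accumulate twice as many micro-batches as an unmasked batch, keeping the number of "live" gradients per update roughly constant.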
Qi Lv
School of Computer Science and Technology, Soochow University and the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)
Lei Geng
Ziqiang Cao
Soochow University
Min Cao
School of Computer Science and Technology, Institution of Artificial Intelligence, Soochow University
Sujian Li
Peking University
Wenjie Li
Hong Kong Polytechnic University
Guohong Fu
School of Computer Science and Technology, Institution of Artificial Intelligence, Soochow University