🤖 AI Summary
To address insufficient feature discriminability in deep neural networks, this paper proposes a novel large-margin discriminative loss function that explicitly models intra-class compactness and a minimum inter-class margin, the first integration of explicit margin constraints into the loss term. Theoretical analysis uncovers the coupling between the compactness and margin components, guiding the design of a partial momentum update strategy that balances optimization stability and parameter consistency. Gradient analysis and a generalization error bound provide further rigorous justification. Extensive experiments on multiple benchmark datasets demonstrate that the proposed method significantly improves classification accuracy, effectively mitigates degenerate solutions, and enhances model generalization.
📝 Abstract
In this paper, we introduce a novel discriminative loss function with a large margin in the context of deep learning. This loss boosts the discriminative power of neural networks, characterized by intra-class compactness and inter-class separability. On the one hand, class compactness is ensured by keeping samples of the same class close to each other. On the other hand, inter-class separability is promoted by a margin loss that enforces a minimum distance between each class and its closest decision boundary. Every term in our loss has an explicit meaning, giving a direct view of the learned feature space. We mathematically analyze the relation between the compactness and margin terms, yielding guidelines on how the hyper-parameters shape the learned features. Moreover, we analyze properties of the gradient of the loss with respect to the parameters of the neural network. Based on this analysis, we design a strategy called partial momentum updating that enjoys both stability and consistency during training. Furthermore, we provide theoretical insights explaining why our method avoids trivial solutions that do not improve the generalization capability of the model. We also investigate generalization error bounds for further theoretical insight. Experiments show promising results for our method.
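To make the two ingredients of the abstract concrete, here is a minimal sketch of a compactness term (pulling same-class features toward their class mean) and a margin term (penalizing class means that lie closer than a margin `m`). All function names and the exact formulas are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch of the two loss components described in the abstract.
# The exact formulation in the paper may differ; this only illustrates the idea.
import math

def class_means(features, labels):
    """Mean feature vector per class label."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def euclidean(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def compactness_loss(features, labels, means):
    """Intra-class term: average squared distance to the class mean."""
    return sum(euclidean(x, means[y]) ** 2
               for x, y in zip(features, labels)) / len(features)

def margin_loss(means, m):
    """Inter-class term: hinge penalty when class means are closer than m."""
    classes = sorted(means)
    total, pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            total += max(0.0, m - euclidean(means[a], means[b]))
            pairs += 1
    return total / max(pairs, 1)
```

A total loss of the kind the abstract describes would then combine the classification loss with these two terms, e.g. `ce + alpha * compactness + beta * margin`, where `alpha` and `beta` are the hyper-parameters whose interplay the paper analyzes.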
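The "partial momentum updating" strategy mentioned above can be sketched as applying a momentum (EMA-smoothed) update only to a designated subset of parameters (for instance, class centers) for stability, while the remaining parameters take plain gradient steps for consistency. The parameter split, coefficients, and names below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a partial momentum update: only parameters named in
# momentum_keys get an EMA-smoothed (momentum) step; the rest get plain SGD.
def partial_momentum_step(params, grads, momentum_keys, lr=0.1, beta=0.9,
                          velocity=None):
    """One step; params and grads are dicts of name -> list of floats."""
    velocity = velocity or {k: [0.0] * len(params[k]) for k in momentum_keys}
    new_params = {}
    for name, p in params.items():
        g = grads[name]
        if name in momentum_keys:
            # Momentum branch: smooth the gradient with an EMA (velocity).
            v = [beta * vi + (1 - beta) * gi
                 for vi, gi in zip(velocity[name], g)]
            velocity[name] = v
            new_params[name] = [pi - lr * vi for pi, vi in zip(p, v)]
        else:
            # Plain gradient step for the rest of the network.
            new_params[name] = [pi - lr * gi for pi, gi in zip(p, g)]
    return new_params, velocity
```

The design intuition is that slowly-moving quantities (such as class centers used by the margin term) benefit from smoothing, while the backbone weights should track the current gradient to stay consistent with the features they produce.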