🤖 AI Summary
This work addresses the misalignment between the conventional cross-entropy loss and the ultimate goal of classification accuracy. To bridge this gap, the authors propose using normalized conditional mutual information (NCMI) as a differentiable surrogate loss, employing it for the first time in end-to-end training. They introduce an alternating optimization algorithm to efficiently minimize NCMI, thereby establishing a direct link between an information-theoretic metric and classification performance. The method is designed as a plug-and-play replacement for cross-entropy with comparable computational overhead. Empirical results demonstrate consistent improvements across diverse architectures and batch sizes: on ImageNet, ResNet-50 achieves a 2.77% gain in Top-1 accuracy, and on the CAMELYON-17 dataset, macro F1 score increases by 8.6%.
📝 Abstract
In this paper, we propose a novel information-theoretic surrogate loss, normalized conditional mutual information (NCMI), as a drop-in alternative to the de facto cross-entropy (CE) loss for training deep neural network (DNN) based classifiers. We first observe that a model's NCMI is inversely proportional to its accuracy. Building on this insight, we introduce an alternating algorithm to efficiently minimize the NCMI. Across image recognition and whole-slide imaging (WSI) subtyping benchmarks, NCMI-trained models surpass state-of-the-art losses by substantial margins at a computational cost comparable to that of CE. Notably, on ImageNet, NCMI yields a 2.77% top-1 accuracy improvement with ResNet-50 compared to CE; on CAMELYON-17, replacing CE with NCMI improves the macro-F1 score by 8.6% over the strongest baseline. Gains are consistent across various architectures and batch sizes, suggesting that NCMI is a practical and competitive alternative to CE.
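The abstract does not spell out how NCMI is computed. Purely as an illustration of what a normalized CMI-style objective over a batch of softmax outputs could look like, the sketch below forms a ratio of an intra-class concentration term (mean KL divergence of each prediction from its class centroid) to an inter-class separation term (average KL divergence between class centroids). The function name `ncmi_loss` and the specific choice of divergences are assumptions for this sketch, not the authors' exact formulation.

```python
import numpy as np

def ncmi_loss(probs, labels, eps=1e-12):
    """Illustrative plug-in estimate of an NCMI-style objective.

    probs  : (N, C) array of softmax outputs for a batch.
    labels : (N,) array of integer class labels.

    NOTE: this is a hypothetical sketch of a normalized conditional
    mutual information surrogate, not the paper's exact loss.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    classes = np.unique(labels)

    # Class-conditional mean predictions (one centroid per class).
    Q = {c: probs[labels == c].mean(axis=0) for c in classes}

    # KL divergence with numerical safeguards.
    def kl(p, q):
        return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

    # Concentration: how far each sample's prediction sits from its
    # own class centroid (smaller = tighter clusters).
    cmi = np.mean([kl(p, Q[y]) for p, y in zip(probs, labels)])

    # Separation: average divergence between distinct class centroids
    # (larger = better-separated classes); normalizes the CMI term.
    pairs = [(a, b) for a in classes for b in classes if a != b]
    sep = np.mean([kl(Q[a], Q[b]) for a, b in pairs]) if pairs else eps

    return cmi / (sep + eps)
```

Under this sketch, a batch whose predictions are sharp and aligned with their labels yields a near-zero ratio (tight clusters, wide separation), while ambiguous predictions inflate it, which is consistent with the abstract's claim that lower NCMI accompanies higher accuracy.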