Normalized Conditional Mutual Information Surrogate Loss for Deep Neural Classifiers

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the misalignment between the conventional cross-entropy loss and the ultimate goal of classification accuracy. To bridge this gap, the authors propose using normalized conditional mutual information (NCMI) as a differentiable surrogate loss, employing it for the first time in end-to-end training. They introduce an alternating optimization algorithm to efficiently minimize NCMI, thereby establishing a direct link between an information-theoretic metric and classification performance. The method is designed as a plug-and-play replacement for cross-entropy with comparable computational overhead. Empirical results demonstrate consistent improvements across diverse architectures and batch sizes: on ImageNet, ResNet-50 achieves a 2.77% gain in Top-1 accuracy, and on the CAMELYON-17 dataset, macro F1 score increases by 8.6%.

📝 Abstract
In this paper, we propose a novel information-theoretic surrogate loss, normalized conditional mutual information (NCMI), as a drop-in alternative to the de facto cross-entropy (CE) loss for training deep neural network (DNN) classifiers. We first observe that a model's NCMI is inversely proportional to its accuracy. Building on this insight, we introduce an alternating algorithm to efficiently minimize the NCMI. Across image recognition and whole-slide imaging (WSI) subtyping benchmarks, NCMI-trained models surpass state-of-the-art losses by substantial margins at a computational cost comparable to that of CE. Notably, on ImageNet, NCMI yields a 2.77% top-1 accuracy improvement with ResNet-50 compared to CE; on CAMELYON-17, replacing CE with NCMI improves the macro-F1 by 8.6% over the strongest baseline. Gains are consistent across architectures and batch sizes, suggesting that NCMI is a practical and competitive alternative to CE.
Problem

Research questions and friction points this paper is trying to address.

surrogate loss
deep neural classifiers
cross-entropy
classification accuracy
information theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Normalized Conditional Mutual Information
Surrogate Loss
Deep Neural Classifiers
Cross-Entropy Alternative
Information-Theoretic Optimization
Linfeng Ye
University of Toronto
Information Theory · Computer Vision · Computational Pathology
Zhixiang Chi
University of Toronto
Computer Vision · Machine Learning
Konstantinos N. Plataniotis
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Canada
En-hui Yang
Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada