🤖 AI Summary
To address the low training efficiency, overconfident predictions, and lack of uncertainty quantification in CNNs under few-shot and noisy data regimes, this paper proposes a novel framework integrating topology-aware learning with Bayesian inference. Methodologically, it embeds persistent-homology-guided manifold structural priors into convolutional architectures and augments topological CNNs with dynamic prior adaptation, MCMC-based posterior sampling, and topological consistency regularization, yielding calibration-aware Bayesian learning. Experiments demonstrate that the proposed model significantly outperforms standard CNNs, Bayesian neural networks (BNNs), and existing topological CNNs in calibration error (ECE), out-of-distribution detection (AUROC), and few-shot robustness. Gains are particularly pronounced on CIFAR-10/100 and ImageNet subsets under data scarcity or label noise, establishing new state-of-the-art performance in uncertainty-aware visual recognition under challenging data conditions.
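The summary reports calibration quality via expected calibration error (ECE). As background, ECE is typically computed by binning predictions by confidence and taking the weighted gap between per-bin accuracy and per-bin confidence. The sketch below is a minimal, generic implementation of that standard binned definition, not code from the paper; the function name and bin count are illustrative choices.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean |accuracy - confidence| over confidence bins.

    confidences: predicted max-class probabilities in [0, 1]
    correct:     1 if the prediction was right, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # bins are (lo, hi]; the first bin also includes exactly 0
        mask = (confidences > lo) & (confidences <= hi)
        if i == 0:
            mask |= confidences == 0.0
        if not mask.any():
            continue
        acc = correct[mask].mean()   # empirical accuracy in this bin
        conf = confidences[mask].mean()  # mean predicted confidence
        ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated batch (e.g. confidence 0.75 with 75% of predictions correct) yields an ECE of 0, while uniformly overconfident predictions drive the score toward 1.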
📝 Abstract
Convolutional neural networks (CNNs) have been established as the main workhorse in image data processing; nonetheless, they require large amounts of data to train, often produce overconfident predictions, and frequently lack the ability to quantify the uncertainty of their predictions. To address these concerns, we propose a new Bayesian topological CNN that promotes a novel interplay between topology-aware learning and Bayesian sampling. Specifically, it utilizes topological information about the data manifold to accelerate training while reducing calibration error by placing prior distributions on network parameters and learning appropriate posteriors. One important contribution of our work is the inclusion of a consistency condition in the learning cost, which can effectively modify the prior distributions to improve the performance of our novel network architecture. We evaluate the model on benchmark image classification datasets and demonstrate its superiority over conventional CNNs, Bayesian neural networks (BNNs), and topological CNNs. In particular, we supply evidence that our method provides an advantage in situations where training data is limited or corrupted. Furthermore, we show that the new model allows for better uncertainty quantification than standard BNNs, since it can more readily identify out-of-distribution examples on which it has not been trained. Our results highlight the potential of our novel hybrid approach for more efficient and robust image classification.
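The abstract's core Bayesian ingredients (a prior over network parameters plus posterior sampling, per the summary via MCMC) can be illustrated on a toy problem. The sketch below is not the paper's architecture: a one-parameter logistic regression stands in for the CNN, the Gaussian prior and random-walk Metropolis-Hastings sampler are generic illustrative choices, and no topological term is included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D logistic-regression data (illustrative stand-in for a CNN task);
# the true weight generating the labels is 2.0.
x = rng.normal(size=200)
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(float)

def log_posterior(w, sigma_prior=1.0):
    """Unnormalized log posterior: Bernoulli likelihood + Gaussian prior on w."""
    logits = w * x
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
    logprior = -0.5 * (w / sigma_prior) ** 2
    return loglik + logprior

# Random-walk Metropolis-Hastings over the single weight.
w, lp, samples = 0.0, log_posterior(0.0), []
for _ in range(5000):
    w_new = w + 0.2 * rng.normal()          # symmetric proposal
    lp_new = log_posterior(w_new)
    if np.log(rng.random()) < lp_new - lp:  # accept with prob min(1, ratio)
        w, lp = w_new, lp_new
    samples.append(w)

posterior_mean = np.mean(samples[1000:])    # discard burn-in
```

The retained samples approximate the posterior over the weight; their spread is what supplies the uncertainty estimates that a point-estimate CNN lacks.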