🤖 AI Summary
This work addresses overconfidence in deep neural networks on out-of-distribution (OOD) inputs, which undermines their reliability in open-world settings. The authors propose OTIS, a novel method that leverages the singular boundaries of semi-discrete optimal transport (OT) to generate semantically ambiguous OOD samples in the latent space. A confidence suppression loss applied to these samples encourages the model to produce better-calibrated predictions in regions of structural uncertainty, yielding a principled, geometry-aware suppression of overconfidence grounded in the intrinsic structure induced by OT. Extensive experiments demonstrate that OTIS significantly outperforms state-of-the-art methods across multiple OOD detection benchmarks.
📝 Abstract
Deep neural networks (DNNs) often produce overconfident predictions on out-of-distribution (OOD) inputs, undermining their reliability in open-world environments. Singularities in semi-discrete optimal transport (OT) mark regions of semantic ambiguity, where classifiers are particularly prone to unwarranted high-confidence predictions. Motivated by this observation, we propose a principled framework to mitigate OOD overconfidence by leveraging the geometry of OT-induced singular boundaries. Specifically, we formulate an OT problem between a continuous base distribution and the latent embeddings of training data, and identify the resulting singular boundaries. By sampling near these boundaries, we construct a class of OOD inputs, termed optimal transport-induced OOD samples (OTIS), which are geometrically grounded and inherently semantically ambiguous. During training, a confidence suppression loss is applied to OTIS to guide the model toward more calibrated predictions in structurally uncertain regions. Extensive experiments show that our method significantly alleviates OOD overconfidence and outperforms state-of-the-art methods.
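The construction described in the abstract can be sketched in a toy 2-D setting. Everything below is an illustrative assumption rather than the paper's actual procedure: the embeddings and class labels are hand-placed, the potential weights are taken as the simple choice h_i = ||y_i||² / 2 (reducing the OT cells to a Voronoi partition) instead of being solved from the OT problem, and the tie tolerance `eps` and the uniform-target suppression loss are plausible stand-ins for the paper's formulation.

```python
import numpy as np

# Toy latent embeddings of the training data: two classes, two points each.
# (Illustrative values -- the paper works with learned embeddings.)
Y = np.array([[2.0, 1.0], [2.0, -1.0], [-2.0, 1.0], [-2.0, -1.0]])
labels = np.array([0, 0, 1, 1])

# Semi-discrete OT potential u(x) = max_i(<x, y_i> - h_i). We take
# h_i = ||y_i||^2 / 2, which makes the cells a Voronoi partition; the
# paper instead fits h by solving the OT problem against the data masses.
h = 0.5 * np.sum(Y ** 2, axis=1)

def cell_scores(x):
    """Linear scores <x, y_i> - h_i; the argmax identifies the OT cell of x."""
    return Y @ x - h

def near_singular(x, eps=0.1):
    """True if x lies near a singular boundary: the two best cells nearly
    tie AND belong to different classes (the semantically ambiguous case)."""
    s = cell_scores(x)
    order = np.argsort(s)
    best, second = order[-1], order[-2]
    return (s[best] - s[second] < eps) and (labels[best] != labels[second])

# Construct OT-induced OOD samples by rejection sampling from a continuous
# base distribution and keeping points near cross-class singular boundaries.
rng = np.random.default_rng(0)
base = rng.normal(0.0, 2.0, size=(5000, 2))
otis = np.array([x for x in base if near_singular(x)])

def suppression_loss(logits):
    """Confidence suppression on OOD samples: cross-entropy against the
    uniform label distribution, minimized when predictions are maximally
    uncertain."""
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp.mean()
```

Note that a near-tie between two cells of the *same* class (e.g. between the two class-0 embeddings at x = (1.5, 0)) is not flagged here: only cross-class boundaries carry the semantic ambiguity the abstract targets, which is what distinguishes this sampling from ordinary boundary sampling.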