AI Summary
Deep neural networks (DNNs) often exhibit overconfidence on out-of-distribution (OOD) inputs, severely compromising reliability in real-world deployment. To address this, we propose a label-free, fine-tuning-free OOD detection method. Our approach first synthesizes controllable pseudo-OOD samples to identify the network module most sensitive to ID/OOD information flow discrepancies. We then compute the conditional entropy of that module's output as an unsupervised OOD confidence score. The method integrates module-level information flow analysis, multi-layer feature response comparison, and targeted pseudo-OOD generation to substantially enhance discriminative power. Evaluated across multiple standard in-distribution/out-of-distribution benchmarks, our method consistently outperforms existing state-of-the-art approaches, achieving an average 12.3% reduction in false positive rate at 95% true positive rate (FPR95). Results demonstrate its effectiveness, broad applicability across architectures and datasets, and plug-and-play compatibility.
Abstract
Deep neural networks (DNNs) often exhibit overconfidence when encountering out-of-distribution (OOD) samples, posing significant challenges for real-world deployment. Since DNNs are trained on in-distribution (ID) datasets, the information flow of ID samples through DNNs inevitably differs from that of OOD samples. In this paper, we propose an Entropy-based Out-Of-distribution Detection (EOOD) framework. EOOD first identifies the specific block where the information flow differences between ID and OOD samples are most pronounced, using both ID and pseudo-OOD samples. It then calculates the conditional entropy on the selected block as the OOD confidence score. Comprehensive experiments conducted across various ID and OOD settings demonstrate the effectiveness of EOOD in OOD detection and its superiority over state-of-the-art methods.
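To make the scoring idea concrete, the sketch below computes a Shannon-entropy score over a selected block's activations. This is a minimal illustration, not the paper's exact formulation: the function name, the softmax normalization of activations into a distribution, and the toy inputs are all assumptions introduced here for clarity.

```python
import numpy as np

def entropy_score(block_output, eps=1e-12):
    """Shannon entropy of a selected block's activation distribution.

    `block_output` is a 1-D array of activations from the chosen block.
    Normalizing via softmax and taking Shannon entropy is an illustrative
    assumption; EOOD's conditional entropy is defined in the paper itself.
    """
    z = block_output - block_output.max()       # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()             # softmax -> probability distribution
    return float(-(p * np.log(p + eps)).sum())  # Shannon entropy in nats

# A peaked (ID-like, confident) response yields low entropy;
# a flat (OOD-like, diffuse) response yields high entropy.
peaked = np.array([8.0, 0.1, 0.1, 0.1])
flat = np.array([1.0, 1.0, 1.0, 1.0])
print(entropy_score(peaked) < entropy_score(flat))  # True
```

Thresholding such a score (low entropy → ID, high entropy → OOD) is the standard way entropy-style confidence scores are turned into a detector.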