🤖 AI Summary
Deep generative models (DGMs) often underperform in out-of-distribution (OOD) detection due to overfitting and post-convergence degradation of discriminative capability. This work challenges the conventional assumption that models must be fully converged before being deployed for OOD detection, revealing that *immature* DGMs—trained with early stopping—can outperform their fully converged counterparts on image-based OOD detection. To harness this phenomenon, we propose a model-agnostic method that jointly leverages layer-wise gradient norm analysis and density estimation, integrating early stopping with a typicality-based framework. We further establish a “support set overlap” theory to characterize the underlying mechanism. Extensive experiments across multiple image benchmarks and likelihood-based DGMs demonstrate that our approach significantly surpasses baselines—including standard typicality tests—in OOD detection accuracy and robustness, while reducing training overhead by over 40%.
📝 Abstract
Likelihood-based deep generative models (DGMs) have gained significant attention for their ability to approximate the distributions of high-dimensional data. However, these models lack a performance guarantee in assigning higher likelihood values to in-distribution (ID) inputs, the data the models are trained on, compared to out-of-distribution (OOD) inputs. This counter-intuitive behaviour is particularly pronounced when ID inputs are more complex than OOD data points. One potential approach to address this challenge involves leveraging the gradient of a data point with respect to the parameters of the DGM. A recent OOD detection framework proposed estimating the joint density of layer-wise gradient norms for a given data point as a model-agnostic method, demonstrating superior performance compared to the Typicality Test across likelihood-based DGMs and image dataset pairs. Notably, most existing methods presuppose access to fully converged models, the training of which is both time-intensive and computationally demanding. In this work, we demonstrate that using immature models, stopped at early stages of training, can often achieve equivalent or even superior results on this downstream task compared to mature models capable of generating high-quality samples that closely resemble ID data. This novel finding enhances our understanding of how DGMs learn the distribution of ID data and highlights the potential of leveraging partially trained models for downstream tasks. Furthermore, we offer a possible explanation for this unexpected behaviour through the concept of support overlap.
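To make the gradient-norm idea concrete, here is a minimal, hedged sketch of the general recipe the abstract describes: compute the gradient of a point's log-likelihood with respect to each parameter group ("layer"), take per-group norms as features, and score new points by how atypical their features are relative to ID statistics. This toy uses a diagonal Gaussian as a stand-in generative model and a simple z-score density proxy; the `layerwise_grad_norms` and `ood_score` helpers are illustrative names, not the paper's actual implementation.

```python
import numpy as np

def layerwise_grad_norms(x, mu, log_sigma):
    """Norms of the log-likelihood gradient w.r.t. each parameter group.

    The "layers" here are just the mean and log-scale of a diagonal
    Gaussian; in a real DGM these would be the network's layers.
    """
    sigma2 = np.exp(2.0 * log_sigma)
    g_mu = (x - mu) / sigma2                    # d log p / d mu
    g_ls = ((x - mu) ** 2) / sigma2 - 1.0       # d log p / d log_sigma
    return np.array([np.linalg.norm(g_mu), np.linalg.norm(g_ls)])

rng = np.random.default_rng(0)
mu, log_sigma = np.zeros(4), np.zeros(4)        # "trained" toy model

# Fit simple per-feature statistics to gradient-norm features of ID data;
# a richer joint density estimator would be used in practice.
id_feats = np.stack([layerwise_grad_norms(rng.normal(size=4), mu, log_sigma)
                     for _ in range(500)])
m, s = id_feats.mean(0), id_feats.std(0) + 1e-8

def ood_score(x):
    z = (layerwise_grad_norms(x, mu, log_sigma) - m) / s
    return float(np.sum(z ** 2))                # larger => more atypical

in_point = rng.normal(size=4)                   # drawn from the ID law
out_point = rng.normal(size=4) + 6.0            # shifted, hence OOD
```

An OOD input sits far from the model's fit, so its gradient norms are inflated and `ood_score(out_point)` comes out much larger than for an ID point. The paper's early-stopping result says this scoring can work as well or better when `mu, log_sigma` come from a partially trained model.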