🤖 AI Summary
This work investigates the poorly understood phenomenon in which deep generative models often assign spuriously high likelihoods to out-of-distribution (OOD) data. By decoupling the network backbone from the density estimator, the authors propose Jacobian-based and autoregressive density estimation approaches and systematically analyze how various architectures relate data complexity to estimated density. Across diverse models, including iGPT, PixelCNN++, Glow, diffusion models, DINOv2, and I-JEPA, they reveal for the first time a consistent tendency to assign higher density estimates to low-complexity samples, irrespective of training objective or type of density estimator. Experiments on CIFAR-10, SVHN, and other benchmarks demonstrate a strong correlation between density estimates and external measures of data complexity, substantially expanding the empirical foundation for understanding OOD anomalies in deep learning.
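The summary refers to "external measures of data complexity" without naming one. A common proxy in this literature (an assumption here, not necessarily the paper's exact metric) is the compressed size of a sample: regular, low-complexity images compress well, noisy high-complexity images do not. A minimal sketch using zlib:

```python
import zlib
import numpy as np

def complexity_bits(image: np.ndarray) -> int:
    """Illustrative complexity proxy: bits needed to zlib-compress the raw
    bytes of an image. Lower-complexity (more regular) images compress
    better, so they score lower. A stand-in, not the paper's exact measure."""
    return 8 * len(zlib.compress(image.astype(np.uint8).tobytes(), level=9))

# A flat (low-complexity) image vs. a uniformly noisy (high-complexity) one.
rng = np.random.default_rng(0)
flat = np.zeros((32, 32, 3), dtype=np.uint8)
noise = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

assert complexity_bits(flat) < complexity_bits(noise)
```

Any lossless compressor (PNG, FLIF, gzip) yields a similar ordering; what matters for the correlation analysis is the ranking it induces, not the absolute bit counts.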
📝 Abstract
Estimated density is often interpreted as indicating how typical a sample is under a model. Yet deep models trained on one dataset can assign \emph{higher} density to simpler out-of-distribution (OOD) data than to in-distribution test data. We refer to this behavior as the OOD anomaly. Prior work typically studies this phenomenon within a single architecture, detector, or benchmark, implicitly assuming certain canonical densities. We instead separate the trained network from the density estimator built from its representations or outputs, introducing two families of estimators, Jacobian-based estimators and autoregressive self-estimators, which make density analysis applicable to a wide range of models.
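A Jacobian-based estimator of the kind described here typically rests on the change-of-variables formula: if a feature map $f$ sends the data to a latent with a standard-normal prior, then $\log p(x) = \log \mathcal{N}(f(x); 0, I) + \log\lvert\det J_f(x)\rvert$. The sketch below illustrates that recipe with a finite-difference Jacobian (in practice one would use autodiff); it is a hypothetical illustration of the general formula, not the paper's implementation:

```python
import numpy as np

def jacobian_logdensity(f, x, eps=1e-5):
    """Change-of-variables density estimate:
    log p(x) = log N(f(x); 0, I) + log|det J_f(x)|.
    Assumes f maps the data to a standard-normal latent; the Jacobian
    is approximated by central finite differences for illustration."""
    d = x.size
    z = f(x)
    J = np.empty((d, d))
    for j in range(d):
        dx = np.zeros(d)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    log_prior = -0.5 * (z @ z) - 0.5 * d * np.log(2 * np.pi)
    _, logdet = np.linalg.slogdet(J)  # sign discarded; we need log|det|
    return log_prior + logdet

# Sanity check on a linear map f(x) = A @ x, where the implied density
# is known in closed form: x ~ N(0, (A^T A)^{-1}).
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3], [0.1, 1.5]])
x = rng.normal(size=2)

est = jacobian_logdensity(lambda v: A @ v, x)
cov = np.linalg.inv(A.T @ A)
true = (-0.5 * x @ np.linalg.solve(cov, x)
        - 0.5 * np.log((2 * np.pi) ** 2 * np.linalg.det(cov)))
assert np.isclose(est, true, atol=1e-6)
```

The point of decoupling is visible in the signature: `f` can be any trained network's map, and the estimator is bolted on afterwards rather than baked into the training objective.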
Applying this perspective to a range of models, including iGPT, PixelCNN++, Glow, score-based diffusion models, DINOv2, and I-JEPA, we find a striking regularity that goes beyond the OOD anomaly: \textbf{lower-complexity samples receive higher estimated density, while higher-complexity samples receive lower estimated density}. This ordering appears within a test set and across OOD pairs such as CIFAR-10 and SVHN, and remains highly consistent across independently trained models. To quantify these orderings, we use Spearman rank correlation and find striking agreement both across models and with external complexity metrics. Even when trained only on the lowest-density (most complex) samples, or \textbf{even on a single such sample}, the resulting models still assign higher density to simpler images.
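Spearman rank correlation, used above to quantify the orderings, is just the Pearson correlation of the ranks. A self-contained sketch with hypothetical numbers (assuming no ties, which holds for continuous density estimates):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    Assumes no ties, which is typical for continuous density estimates."""
    ra = np.argsort(np.argsort(a)).astype(float)  # rank of each entry
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))

# Toy illustration: if estimated log-density decreases monotonically with
# complexity, the two rankings are perfectly reversed and rho = -1.
complexity = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
log_density = np.array([-1.0, -2.2, -2.9, -4.1, -5.0, -5.8])
rho = spearman(complexity, log_density)
assert np.isclose(rho, -1.0)
```

Because only the ranks enter, the statistic is invariant to any monotone rescaling of either quantity, which is what makes it suitable for comparing density estimates across models with incomparable absolute scales.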
These observations lead us beyond the original OOD anomaly to a more general conclusion: deep networks consistently favor simple data. Our goal is not to close this question, but to define and visualize it more clearly. We broaden its empirical scope and show that it appears across architectures, objectives, and density estimators.