🤖 AI Summary
Diffusion models (DMs) implicitly learn the local intrinsic dimension (LID) of data lying on low-dimensional manifolds, but existing LID estimators, in particular FLIPD, lack rigorous theoretical foundations: FLIPD was previously proven consistent only under the unrealistic assumption of affine submanifolds.
Method: The paper analyzes FLIPD, which estimates LID from the sensitivity of the DM's log marginal densities to the noise strength during diffusion. It extends the theoretical analysis from affine submanifolds to general smooth manifolds, and shows that an analogous result holds when Gaussian convolutional noise is replaced with uniform noise.
Contribution/Results: This work provides the first rigorous proof of FLIPD's convergence and consistency under realistic smooth-manifold assumptions, and generalizes the theory to a broader noise setting. Since FLIPD already achieves state-of-the-art empirical accuracy at LID estimation, these guarantees make LID-based applications more trustworthy: quantifying the complexity of a given datum, detecting outliers and adversarial examples, and identifying AI-generated content. The result offers both theoretical grounding and practical utility for downstream applications.
📝 Abstract
The manifold hypothesis asserts that data of interest in high-dimensional ambient spaces, such as image data, lies on unknown low-dimensional submanifolds. Diffusion models (DMs) -- which operate by convolving data with progressively larger amounts of Gaussian noise and then learning to revert this process -- have risen to prominence as the most performant generative models, and are known to be able to learn distributions with low-dimensional support. For a given datum in one of these submanifolds, we should thus intuitively expect DMs to have implicitly learned its corresponding local intrinsic dimension (LID), i.e. the dimension of the submanifold it belongs to. Kamkari et al. (2024b) recently showed that this is indeed the case by linking this LID to the rate of change of the log marginal densities of the DM with respect to the amount of added noise, resulting in an LID estimator known as FLIPD. LID estimators such as FLIPD have a plethora of uses; among others, they quantify the complexity of a given datum, and can be used to detect outliers, adversarial examples, and AI-generated text. FLIPD achieves state-of-the-art performance at LID estimation, yet its theoretical underpinnings are incomplete since Kamkari et al. (2024b) only proved its correctness under the highly unrealistic assumption of affine submanifolds. In this work we bridge this gap by formally proving the correctness of FLIPD under realistic assumptions. Additionally, we show that an analogous result holds when Gaussian convolutions are replaced with uniform ones, and discuss the relevance of this result.
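To make the identity underlying FLIPD concrete, here is a minimal toy sketch (not the paper's implementation). Assume data is a standard Gaussian supported on a d-dimensional linear subspace of R^D, i.e. exactly the affine case already covered by Kamkari et al. (2024b). Convolving with isotropic Gaussian noise of scale sigma gives a closed-form marginal, and the rate of change of its log density with respect to log sigma recovers LID via LID = D + d log p_sigma / d log sigma as sigma → 0:

```python
import numpy as np

def log_marginal_at_origin(sigma, D=10, d=3):
    # Noisy marginal at the origin for a standard Gaussian supported on a
    # d-dim linear subspace of R^D, convolved with N(0, sigma^2 I_D):
    # variance 1 + sigma^2 along the d on-manifold axes, sigma^2 off-manifold.
    on_manifold = -0.5 * d * np.log(2 * np.pi * (1.0 + sigma**2))
    off_manifold = -0.5 * (D - d) * np.log(2 * np.pi * sigma**2)
    return on_manifold + off_manifold

D, d = 10, 3
sigma = 1e-3          # small noise scale
eps = 1e-6            # finite-difference step in log sigma
slope = (log_marginal_at_origin(sigma * np.exp(eps), D, d)
         - log_marginal_at_origin(sigma * np.exp(-eps), D, d)) / (2 * eps)
lid_estimate = D + slope  # FLIPD-style identity: LID = D + d log p / d log sigma
print(round(lid_estimate, 4))  # prints 3.0
```

The off-manifold density scales as sigma^{-(D-d)}, so the slope in log sigma is approximately d - D, and adding D back recovers the intrinsic dimension. The paper's contribution is showing that this reasoning survives beyond such affine toy cases, on general smooth manifolds and under uniform noise.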