🤖 AI Summary
Normalizing flows suffer from limited expressiveness and poor interpretability in high-dimensional density estimation and generative modeling. To address this, we propose Fractal Flow, a reversible generative model that integrates topic modeling with a fractal-inspired recursive architecture. Methodologically, we couple Latent Dirichlet Allocation (LDA) with a Kolmogorov-Arnold Network to construct a semantically interpretable latent space, and we design a fractal-motivated recursive invertible module that enhances modeling capacity via hierarchical Jacobian transformations. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and geophysical datasets demonstrate significant improvements in density estimation accuracy, and the model further enables semantic clustering in latent space and fine-grained controllable generation. By unifying expressive power with structural transparency, Fractal Flow achieves both state-of-the-art performance and intrinsic interpretability, advancing principled, human-understandable deep generative modeling.
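The paper's exact recursive module is not reproduced here, but the idea of a fractal-style invertible block with hierarchically accumulated Jacobian terms can be sketched in a few lines. The following is a minimal, assumed PyTorch illustration, not the authors' implementation: an affine coupling layer whose first half is itself transformed by a smaller block of the same kind, so the log-determinant is summed across recursion levels. The class name `RecursiveCoupling` and the small Tanh conditioner are illustrative choices only.

```python
import torch
import torch.nn as nn

class RecursiveCoupling(nn.Module):
    """Illustrative sketch (not the paper's module) of a fractal-style
    recursive invertible block: an affine coupling layer whose first half
    is processed by a child block of the same kind."""

    def __init__(self, dim, depth):
        super().__init__()
        self.dim = dim
        half = dim // 2
        # Conditioner producing scale and shift for the second half of x.
        self.net = nn.Sequential(
            nn.Linear(half, 64), nn.Tanh(), nn.Linear(64, 2 * (dim - half))
        )
        # Recurse: a smaller flow of the same structure acts on the first half.
        self.child = RecursiveCoupling(half, depth - 1) if depth > 0 and half > 1 else None

    def forward(self, x):
        """Return (z, log_det) for the change-of-variables objective."""
        half = self.dim // 2
        x1, x2 = x[:, :half], x[:, half:]
        log_det = x.new_zeros(x.shape[0])
        if self.child is not None:
            x1, ld = self.child(x1)          # hierarchical (fractal) sub-transform
            log_det = log_det + ld
        s, t = self.net(x1).chunk(2, dim=1)  # scale / shift conditioned on x1
        s = torch.tanh(s)                    # keep scales bounded for stability
        z2 = x2 * torch.exp(s) + t
        log_det = log_det + s.sum(dim=1)     # log|det J| of the affine map
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z):
        half = self.dim // 2
        z1, z2 = z[:, :half], z[:, half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        if self.child is not None:
            z1 = self.child.inverse(z1)      # undo the child transform last
        return torch.cat([z1, x2], dim=1)
```

A block of this kind would be trained by maximizing the base log-density of `z` plus the accumulated `log_det`, e.g. `z, log_det = RecursiveCoupling(4, depth=2)(x)` for 4-dimensional inputs.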
📝 Abstract
Normalizing Flows provide a principled framework for high-dimensional density estimation and generative modeling by constructing invertible transformations with tractable Jacobian determinants. We propose Fractal Flow, a novel normalizing flow architecture that enhances both expressiveness and interpretability through two key innovations. First, we integrate Kolmogorov-Arnold Networks and Latent Dirichlet Allocation into normalizing flows to construct a structured, interpretable latent space and to model hierarchical semantic clusters. Second, inspired by Fractal Generative Models, we introduce a recursive modular design into normalizing flows that improves transformation interpretability and estimation accuracy. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and geophysical data demonstrate that Fractal Flow achieves latent clustering, controllable generation, and superior estimation accuracy.
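For reference, the "tractable Jacobian determinants" mentioned above enter through the standard change-of-variables identity that any normalizing flow, Fractal Flow included, is trained against:

$$
\log p_X(x) = \log p_Z\big(f_\theta(x)\big) + \log \left|\det \frac{\partial f_\theta(x)}{\partial x}\right|,
$$

where $f_\theta$ is the invertible transformation and $p_Z$ the base density. How Fractal Flow structures $p_Z$ with LDA-derived topics and composes $f_\theta$ from recursive modules is specific to the paper and is only summarized above.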