From Core to Detail: Unsupervised Disentanglement with Entropy-Ordered Flows

πŸ“… 2026-02-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses a central challenge in unsupervised representation learning: achieving semantic interpretability and cross-run stability at the same time. The authors propose Entropy-Ordered Flows (EOFlows), a normalizing-flow framework that integrates explained entropy into normalizing flows. By sorting latent dimensions according to their explained entropy after training, EOFlows separates core semantic factors from fine-grained detail and noise, enabling variable-rate compression and disentanglement without pre-specifying the latent dimensionality. The method combines likelihood-based training, local Jacobian regularization, and noise augmentation, drawing on Independent Mechanism Analysis, Principal Component Flows, and Manifold Entropic Metrics. Evaluated on CelebA, EOFlows extracts a rich set of interpretable semantic features, improving both compression fidelity and denoising performance.
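
To make the ordering concrete, here is a minimal Python sketch of one way such an entropy ordering could be computed, assuming a diagonal-Gaussian approximation in which each latent dimension's entropy is estimated from its empirical variance. The paper's actual criterion comes from Manifold Entropic Metrics, so the variance-based proxy and the function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_entropy_per_dim(z: np.ndarray) -> np.ndarray:
    """Per-dimension differential entropy under a Gaussian
    approximation: H_i = 0.5 * log(2 * pi * e * var_i).

    z: array of shape (num_samples, latent_dim) holding latent codes
       produced by a trained flow (assumption: codes are available).
    """
    var = z.var(axis=0) + 1e-12  # guard against log(0)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def entropy_order(z: np.ndarray) -> np.ndarray:
    """Indices of latent dimensions sorted by decreasing entropy,
    so that order[:C] selects the core representation."""
    return np.argsort(gaussian_entropy_per_dim(z))[::-1]

# Toy usage: order 64 latent dimensions from 10k synthetic codes whose
# per-dimension scale (and hence entropy) decays from dim 0 to dim 63.
rng = np.random.default_rng(0)
z = rng.normal(size=(10_000, 64)) * np.linspace(3.0, 0.1, 64)
order = entropy_order(z)
print(order[:8])  # highest-entropy ("core") dimensions come first
```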

πŸ“ Abstract
Learning unsupervised representations that are both semantically meaningful and stable across runs remains a central challenge in modern representation learning. We introduce entropy-ordered flows (EOFlows), a normalizing-flow framework that orders latent dimensions by their explained entropy, analogously to PCA's explained variance. This ordering enables adaptive injective flows: after training, one may retain only the top-C latent variables to form a compact core representation while the remaining variables capture fine-grained detail and noise, with C chosen flexibly at inference time rather than fixed during training. EOFlows builds on insights from Independent Mechanism Analysis, Principal Component Flows, and Manifold Entropic Metrics. We combine likelihood-based training with local Jacobian regularization and noise augmentation into a method that scales well to high-dimensional data such as images. Experiments on the CelebA dataset show that our method uncovers a rich set of semantically interpretable features, allowing for high compression and strong denoising.
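
The adaptive injective readout described in the abstract can be sketched as follows: encode an input, keep only the top-C entropy-ordered latents, fill the discarded detail dimensions with their prior mean, and decode. The `flow_forward` / `flow_inverse` callables and the `z_mean` argument are hypothetical placeholders for a trained model's encode/decode maps, not an API from the paper.

```python
import numpy as np

def core_reconstruct(x, flow_forward, flow_inverse, order, C, z_mean):
    """Reconstruct x from its top-C entropy-ordered latent dimensions.

    flow_forward / flow_inverse: hypothetical encode / decode callables
        of a trained flow (placeholders, not the paper's interface).
    order:  latent indices sorted by decreasing explained entropy.
    C:      number of core dimensions kept, chosen at inference time.
    z_mean: per-dimension latent means used to fill the discarded
        detail dimensions (zeros for a standard-normal base).
    """
    z = flow_forward(x)                            # encode to latents
    z_core = np.broadcast_to(z_mean, z.shape).copy()
    core = order[:C]
    z_core[..., core] = z[..., core]               # keep core, drop detail
    return flow_inverse(z_core)                    # decode core-only code
```

Sweeping C then trades compression against fidelity: a small C retains only the core semantic factors (acting as a denoiser), while a larger C restores progressively more fine-grained detail.
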
Problem

Research questions and friction points this paper is trying to address.

unsupervised representation learning
semantic interpretability
representation stability
disentanglement
normalizing flows
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy-ordered flows
unsupervised disentanglement
normalizing flows
semantic representation
adaptive injective flows