Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints

📅 2024-08-28
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the joint problem of intrinsic dimension estimation and geometry-invariant embedding learning for nonlinear manifold-structured data. We propose an autoencoder framework incorporating orthogonality constraints on hidden-layer gradients. Methodologically, we establish, for the first time, a theoretical connection between gradient orthogonality in neural network latent spaces and the local tangent space dimension of the underlying manifold; this enables simultaneous intrinsic dimension estimation, learning of invertible embedding mappings, and construction of coordinate-invariant representations under local Lie group actions on low-dimensional submanifolds. Our key contribution lies in unifying gradient orthogonality with differential-geometric structure, thereby extending invariant representation learning to continuous group actions. Experiments on standard benchmarks demonstrate accurate intrinsic dimension estimation, disentangled representations, and robust group-invariant embeddings, validating both theoretical soundness and algorithmic robustness.

📝 Abstract
Conformal Autoencoders are a neural network architecture that imposes orthogonality conditions between the gradients of latent variables to achieve disentangled representations of data. In this letter we show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets (locally characterized by the dimension of their tangent space), while simultaneously computing encoding and decoding (embedding) maps. We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm. The method is applied to standard data sets and we highlight its applicability, advantages, and shortcomings. In addition, we demonstrate that the same computational technology can be used to build coordinate invariance to local group actions when defined only on a (reduced) submanifold of the embedding space.
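The abstract's core idea — penalizing non-orthogonality between the gradients of distinct latent coordinates — can be illustrated with a minimal sketch. The code below is a hypothetical JAX implementation, not the authors' code: the architecture (one hidden tanh layer), the parameter layout, and the penalty weight `lam` are all illustrative assumptions. The rows of the encoder Jacobian are exactly the gradients of the latent variables with respect to the input, so the off-diagonal entries of the Jacobian Gram matrix measure the pairwise non-orthogonality the loss penalizes.

```python
# Hypothetical sketch of a conformal-autoencoder loss (NOT the authors' code).
# The orthogonality penalty acts on the Gram matrix of the encoder Jacobian,
# whose rows are the gradients of the latent coordinates w.r.t. the input.
import jax
import jax.numpy as jnp

def encoder(params, x):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)   # illustrative one-hidden-layer architecture
    return W2 @ h + b2

def decoder(params, z):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ z + b1)
    return W2 @ h + b2

def conformal_loss(enc_params, dec_params, x, lam=1.0):
    z = encoder(enc_params, x)
    x_rec = decoder(dec_params, z)
    recon = jnp.sum((x - x_rec) ** 2)          # reconstruction term
    # Rows of J are the gradients of each latent coordinate w.r.t. x.
    J = jax.jacobian(lambda xx: encoder(enc_params, xx))(x)
    gram = J @ J.T
    off_diag = gram - jnp.diag(jnp.diag(gram))
    ortho = jnp.sum(off_diag ** 2)             # penalize non-orthogonal pairs
    return recon + lam * ortho
```

Both terms are differentiable in the network parameters, so the combined loss can be minimized with ordinary gradient descent (e.g. `jax.grad(conformal_loss)`), matching the optimization setting the abstract describes.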
Problem

Research questions and friction points this paper is trying to address.

Detect intrinsic dimensionality of nonlinear manifold data
Impose orthogonality for disentangled latent representations
Build coordinate invariance to local group actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal Autoencoders impose orthogonality for disentangled representations
Infer intrinsic dimensionality via latent layer orthogonality relations
Build coordinate invariance to local group actions
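One simple way the dimension-inference idea might be operationalized (a hedged sketch only, not the paper's exact criterion): once the latent gradients are mutually orthogonal, latent coordinates that do not vary along the data manifold have vanishing gradient norm, so the intrinsic dimension can be read off by counting coordinates whose gradient norm stays above a tolerance. The function name, threshold `tol`, and averaging scheme below are illustrative assumptions.

```python
# Hypothetical dimension estimate from latent gradient norms (illustrative,
# not the paper's algorithm): count latent coordinates whose gradient norm,
# averaged over samples, exceeds a tolerance.
import jax
import jax.numpy as jnp

def estimate_intrinsic_dim(encoder_fn, xs, tol=1e-3):
    # jac has shape (n_samples, latent_dim, input_dim); each row of a
    # per-sample Jacobian is one latent coordinate's gradient.
    jac = jax.vmap(jax.jacobian(encoder_fn))(xs)
    row_norms = jnp.linalg.norm(jac, axis=-1).mean(axis=0)
    return int(jnp.sum(row_norms > tol))
```

For example, an encoder whose second latent coordinate is constant over the data would be reported as one-dimensional by this count.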
George A. Kevrekidis
Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
Mauro Maggioni
Bloomberg Distinguished Professor of Mathematics, and Applied Mathematics and Statistics
Data Science, Harmonic Analysis, Signal Processing, Stochastic Dynamical Systems
Soledad Villar
Johns Hopkins University
mathematics of data, geometric deep learning, computational harmonic analysis
Y. Kevrekidis
Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA