🤖 AI Summary
To jointly achieve interpretability and end-to-end learning in nonlinear dimensionality reduction for high-dimensional data, this paper proposes a differentiable, trainable Diffusion Map (DMAP) encoder integrated into a multilayer sequential neural network, so that geometric structure preservation and deep representation learning are optimized together. The key contribution is the first explicit DMAP layer to support backpropagation, combining manifold-geometric interpretability with gradient-based optimization. The method unifies an autoencoder architecture with sequential modeling, enabling end-to-end training while preserving intrinsic manifold geometry. Experiments on multiple standard manifold benchmarks demonstrate that the learned low-dimensional embeddings improve topological fidelity and reconstruction accuracy, enhance generalization on downstream tasks, and retain clear geometric semantics for human interpretation.
📝 Abstract
In this work, we explore various modifications to diffusion maps (DMAP), including their incorporation into a layered sequential neural network model trained with gradient descent. The result is a sequential neural network that inherits the interpretability of diffusion maps.
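For context, the classical (non-trainable) diffusion-map computation that the paper builds on can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it builds a Gaussian affinity kernel, normalizes it to a Markov-like operator, and takes the leading non-trivial eigenvectors as the embedding. In the paper's trainable variant, the same steps would presumably be expressed with autograd-capable operations (e.g. in PyTorch or JAX) so that parameters such as the kernel bandwidth `eps` (a hypothetical learnable parameter here) can be optimized by gradient descent.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Classical diffusion-map embedding of the rows of X (sketch)."""
    # Pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # Gaussian affinity kernel; eps is the bandwidth hyperparameter.
    W = np.exp(-D2 / eps)
    d = W.sum(axis=1)
    # Symmetric normalization M = D^{-1/2} W D^{-1/2}; M shares its
    # spectrum with the random-walk operator P = D^{-1} W.
    M = W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(M)
    # Sort eigenpairs in descending order of eigenvalue.
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    # Convert back to right eigenvectors of P; the top eigenpair is the
    # trivial constant direction, so the embedding skips index 0.
    psi = vecs / np.sqrt(d)[:, None]
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]
```

Making this differentiable end-to-end is non-trivial mainly because of the eigendecomposition; frameworks like PyTorch and JAX provide gradients through `eigh`, which is what allows such a layer to sit inside a network trained by backpropagation.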