🤖 AI Summary
Conventional dimensionality reduction and clustering methods for high-dimensional data are often decoupled, which hinders effective modeling of multi-scale structural patterns. Method: This paper proposes a unified framework based on distributional reduction which, for the first time, embeds both tasks within the Gromov-Wasserstein (GW) optimal-transport geometry. By modeling the intrinsic metric structure of the data via a GW projection, the framework jointly learns low-dimensional embeddings and multi-scale prototypes through a single optimization objective. It integrates a differentiable GW distance, distributional projection, and end-to-end learning, and theoretically establishes the equivalence between dimensionality reduction and clustering in GW space. Contribution/Results: Evaluated on multiple image and genomic datasets, the method simultaneously improves the interpretability of dimensionality reduction and the accuracy of clustering. It identifies cross-scale, semantically coherent low-dimensional prototypes, demonstrating both the effectiveness and the generalizability of joint multi-scale structural modeling.
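For reference, the GW problem the summary alludes to can be written in its standard form below; the notation (cost matrices, couplings, prototype coordinates) is the usual GW convention chosen here for illustration, not copied verbatim from the paper:

```latex
% Gromov-Wasserstein discrepancy between the data, seen as a metric-measure
% space (C, p) on N points, and a reduced space (\bar{C}, q) on n prototypes.
\[
  \mathrm{GW}(C, \bar{C}, p, q)
  = \min_{T \in \Pi(p, q)}
    \sum_{i,j=1}^{N} \sum_{k,l=1}^{n}
    \bigl( C_{ij} - \bar{C}_{kl} \bigr)^{2} \, T_{ik} \, T_{jl},
  \qquad
  \Pi(p, q) = \{ T \in \mathbb{R}_{+}^{N \times n} : T\mathbf{1} = p,\; T^{\top}\mathbf{1} = q \}.
\]
% A joint DR/clustering objective of the kind described above then also
% optimizes the reduced space itself:
\[
  \min_{Z \in \mathbb{R}^{n \times d},\; q \in \Sigma_{n}}
  \mathrm{GW}\bigl( C(X), C(Z), p, q \bigr),
\]
```

Under this reading, the coupling T acts simultaneously as a soft assignment of the N input points to the n prototypes (clustering), while the learned prototype coordinates Z play the role of the low-dimensional embedding (DR).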
📝 Abstract
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets. Traditionally, this involves using dimensionality reduction (DR) methods to project data onto lower-dimensional spaces or organizing points into meaningful clusters (clustering). In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem. This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem. We empirically demonstrate its relevance to the identification of low-dimensional prototypes representing data at different scales, across multiple image and genomic datasets.
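To make the coupling-as-clustering idea concrete, here is a minimal sketch using the POT library (ot.gromov). It illustrates only the GW coupling step between fixed data and fixed prototypes; the data, prototype initialization, and sizes are placeholders, and the paper's full method additionally optimizes the prototypes end-to-end.

```python
# Minimal sketch of GW coupling between data and low-dimensional prototypes,
# using POT (https://pythonot.github.io). Placeholder data; coupling step only.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))   # placeholder high-dimensional data (N x D)
Z = rng.normal(size=(10, 2))     # placeholder low-dimensional prototypes (n x d)

# Intra-space pairwise distance matrices: the metric structures GW compares.
C_x = ot.dist(X, X, metric="euclidean")
C_z = ot.dist(Z, Z, metric="euclidean")
C_x /= C_x.max()
C_z /= C_z.max()

# Uniform masses over data points and prototypes.
p = np.full(X.shape[0], 1.0 / X.shape[0])
q = np.full(Z.shape[0], 1.0 / Z.shape[0])

# GW coupling T (N x n): T[i, k] is the mass sent from point i to prototype k.
T = ot.gromov.gromov_wasserstein(C_x, C_z, p, q, loss_fun="square_loss")

# Reading T row-wise gives each point a soft assignment over the n prototypes;
# taking the argmax yields hard cluster labels.
labels = T.argmax(axis=1)
print(labels[:20])
```

In the joint framework described above, Z would not stay fixed: differentiating the GW objective with respect to the prototype coordinates (and optionally their masses q) is what lets a single optimization produce the embedding and the clustering together.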