🤖 AI Summary
Aligning low-dimensional manifolds in neural implicit spaces remains challenging: it requires measuring their intrinsic similarity, establishing cross-space point correspondences, and transferring representations effectively between spaces.
Method: This paper introduces the Functional Maps paradigm to neural implicit space modeling for the first time, proposing a spectral-geometry-based, interpretable multi-task alignment framework. It constructs a functional basis for each implicit space via Laplacian eigendecomposition and jointly optimizes weakly supervised correspondences, unifying cross-manifold mapping in the functional domain.
Contribution/Results: The framework supports unsupervised and weakly supervised correspondence discovery, geometry-consistent similarity measurement, and cross-modal representation transfer. Empirically, it achieves significant performance gains on image stitching and cross-modal retrieval tasks, demonstrating its effectiveness and generalizability as a universal representation alignment method.
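Since the paper is only summarized here, the following is a minimal illustrative sketch, not the authors' implementation, of the two ingredients the method description names: a functional basis obtained from the eigendecomposition of a graph Laplacian built over latent points, and a functional map estimated by least squares from a few anchor correspondences. All function names (`functional_basis`, `latent_functional_map`, etc.) and the dense kNN-graph construction are assumptions for illustration.

```python
import numpy as np

def knn_graph_laplacian(X, k=10):
    """Unnormalized graph Laplacian L = D - W of a symmetrized kNN graph.

    Dense O(n^2) construction, fine for a small illustrative point set.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]             # k nearest, excluding self
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = 1.0
    W = np.maximum(W, W.T)                               # symmetrize adjacency
    return np.diag(W.sum(1)) - W

def functional_basis(X, k_eig=20, k_nn=10):
    """Low-frequency Laplacian eigenvectors as a functional basis on the space."""
    L = knn_graph_laplacian(X, k_nn)
    _, vecs = np.linalg.eigh(L)                          # ascending eigenvalues
    return vecs[:, :k_eig]                               # (n, k_eig) basis matrix

def latent_functional_map(phi_src, phi_tgt, anchors):
    """Least-squares functional map C from a few anchor correspondences.

    anchors: list of (i, j) pairs meaning source point i matches target point j.
    C maps basis coefficients of functions on the source space to the target:
    for matched deltas, C @ phi_src[i] ~ phi_tgt[j].
    """
    A = phi_src[[i for i, _ in anchors]]                 # source basis rows at anchors
    B = phi_tgt[[j for _, j in anchors]]                 # target basis rows at anchors
    C, *_ = np.linalg.lstsq(A, B, rcond=None)            # solve A @ C_T ~ B
    return C.T                                           # (k_eig, k_eig) map
```

In this sketch, transferring a function (or a representation expressed in the basis) amounts to multiplying its coefficient vector by `C`; with identical source and target spaces and all points as anchors, `C` reduces to the identity, since the eigenbasis is orthonormal.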
📝 Abstract
Neural models learn data representations that lie on low-dimensional manifolds, yet modeling the relation between these representational spaces is an ongoing challenge. By integrating spectral geometry principles into neural modeling, we show that this problem can be better addressed in the functional domain, reducing complexity while improving interpretability and performance on downstream tasks. To this end, we introduce a multi-purpose framework to the representation learning community, which allows one to: (i) compare different spaces in an interpretable way and measure their intrinsic similarity; (ii) find correspondences between them, in both unsupervised and weakly supervised settings; and (iii) effectively transfer representations between distinct spaces. We validate our framework on various applications, ranging from stitching to retrieval tasks, and on multiple modalities, demonstrating that Latent Functional Maps can serve as a Swiss Army knife for representation alignment.
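One way the abstract's point (i) can be made concrete: in the functional-maps literature, a map between spaces that preserves intrinsic geometry (a near-isometry) yields a near-orthogonal functional map matrix C, so the deviation of C from orthogonality can serve as an interpretable dissimilarity proxy between the two spaces. The sketch below is an illustrative assumption in that spirit, not the paper's actual similarity measure.

```python
import numpy as np

def fm_dissimilarity(C):
    """Orthogonality defect ||C^T C - I||_F of a functional map matrix C.

    Near 0 for near-isometric (well-aligned) spaces; grows as the map
    distorts the intrinsic geometry. A hedged, illustrative proxy only.
    """
    k = C.shape[0]
    return np.linalg.norm(C.T @ C - np.eye(k))
```

For example, an identity (or any rotation) map scores 0, while a uniform scaling by 2 in a 4-dimensional basis scores ||4I - I||_F = 6.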