🤖 AI Summary
Existing non-contrastive self-supervised learning methods primarily focus on statistical properties of representations, neglecting the local geometric structure of the underlying data manifold.
Method: We propose CurvSSL, a framework that explicitly incorporates local curvature into self-supervised learning. It enforces geometric consistency across augmented views by aligning discrete curvatures in the embedding space, computed either via k-nearest neighbors and cosine similarity or via normalized local Gram matrices in a reproducing kernel Hilbert space (RKHS). This is complemented by a redundancy-reduction loss inspired by Barlow Twins. The architecture employs a dual-branch encoder–projector design.
Results: Evaluated with ResNet-18 on MNIST and CIFAR-10 under linear evaluation, CurvSSL matches or surpasses Barlow Twins and VICReg. These results suggest that explicitly modeling local manifold curvature is a simple and effective complement to purely statistical SSL regularizers.
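To make the curvature idea concrete, here is a minimal numpy sketch of a per-point discrete curvature score built from k-nearest-neighbor cosine interactions on the unit hypersphere. The exact formulation used by CurvSSL is not spelled out in this summary, so the particular score below (mean off-diagonal cosine of normalized neighbor directions) is a hypothetical stand-in for illustration only.

```python
import numpy as np

def curvature_scores(Z, k=3):
    """Hypothetical per-point curvature score from k-NN cosine interactions.

    Z: (n, d) array of embeddings. Each row is projected onto the unit
    hypersphere; a point's score summarizes pairwise cosine similarities
    among the unit directions toward its k nearest neighbors. This is an
    illustrative formulation, not necessarily the paper's exact definition.
    """
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sims = Zn @ Zn.T  # all-pairs cosine similarity on the hypersphere
    n = len(Zn)
    scores = np.empty(n)
    for i in range(n):
        # k nearest neighbors by cosine similarity, excluding the point itself
        nbrs = np.argsort(-sims[i])[1:k + 1]
        dirs = Zn[nbrs] - Zn[i]
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        G = dirs @ dirs.T  # Gram matrix of unit neighbor directions
        # mean off-diagonal cosine: directions spread apart in flat
        # neighborhoods and concentrate where the manifold bends
        scores[i] = (G.sum() - np.trace(G)) / (k * (k - 1))
    return scores
```

Comparing such scores between the two augmented views of the same batch gives the geometric consistency signal the summary describes; the kernel variant would replace the explicit dot products with kernel evaluations in an RKHS.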
📝 Abstract
Self-supervised learning (SSL) has recently advanced through non-contrastive methods that couple an invariance term with variance, covariance, or redundancy-reduction penalties. While such objectives shape first- and second-order statistics of the representation, they largely ignore the local geometry of the underlying data manifold. In this paper, we introduce CurvSSL, a curvature-regularized self-supervised learning framework, and its RKHS extension, kernel CurvSSL. Our approach retains a standard two-view encoder–projector architecture with a Barlow Twins-style redundancy-reduction loss on projected features, but augments it with a curvature-based regularizer. Each embedding is treated as a vertex whose $k$ nearest neighbors define a discrete curvature score via cosine interactions on the unit hypersphere; in the kernel variant, curvature is computed from a normalized local Gram matrix in an RKHS. These scores are aligned and decorrelated across augmentations by a Barlow-style loss on a curvature-derived matrix, encouraging both view invariance and consistency of local manifold bending. Experiments on the MNIST and CIFAR-10 datasets with a ResNet-18 backbone show that curvature-regularized SSL yields competitive or improved linear evaluation performance compared to Barlow Twins and VICReg. Our results indicate that explicitly shaping local geometry is a simple and effective complement to purely statistical SSL regularizers.
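The redundancy-reduction component the abstract refers to follows Barlow Twins: projected features of the two views are standardized per dimension, their cross-correlation matrix is computed over the batch, and the loss pushes the diagonal toward 1 (invariance) while penalizing off-diagonal entries (decorrelation). A minimal numpy sketch of that standard loss, assuming the usual weighting hyperparameter `lam`:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins-style redundancy-reduction loss.

    z1, z2: (n, d) projector outputs for two augmented views of one batch.
    Returns the invariance term plus lam times the redundancy term.
    """
    n = z1.shape[0]
    # standardize each feature dimension across the batch
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n  # (d, d) cross-correlation matrix
    on_diag = ((np.diag(c) - 1) ** 2).sum()          # invariance: C_ii -> 1
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # decorrelation
    return on_diag + lam * off_diag
```

In CurvSSL this same Barlow-style form is, per the abstract, additionally applied to a curvature-derived matrix so that local curvature scores are aligned and decorrelated across the two augmentations.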