Self-Supervised Learning by Curvature Alignment

📅 2025-11-21
🤖 AI Summary
Existing non-contrastive self-supervised learning methods focus primarily on statistical properties of representations, neglecting the local geometric structure of the underlying data manifold. Method: CurvSSL explicitly incorporates local curvature into self-supervised learning. It enforces geometric consistency across augmented views by aligning discrete curvatures in the embedding space, computed either from k-nearest-neighbor cosine similarities or from normalized local Gram matrices in a reproducing kernel Hilbert space (RKHS). This is complemented by a redundancy-reduction loss inspired by Barlow Twins, within a dual-branch encoder–projector architecture. Results: Evaluated with a ResNet-18 backbone on MNIST and CIFAR-10 under linear evaluation, CurvSSL matches or surpasses Barlow Twins and VICReg, suggesting that explicitly modeling local manifold curvature is an effective complement to purely statistical SSL regularizers.

📝 Abstract
Self-supervised learning (SSL) has recently advanced through non-contrastive methods that couple an invariance term with variance, covariance, or redundancy-reduction penalties. While such objectives shape first- and second-order statistics of the representation, they largely ignore the local geometry of the underlying data manifold. In this paper, we introduce CurvSSL, a curvature-regularized self-supervised learning framework, and its RKHS extension, kernel CurvSSL. Our approach retains a standard two-view encoder-projector architecture with a Barlow Twins-style redundancy-reduction loss on projected features, but augments it with a curvature-based regularizer. Each embedding is treated as a vertex whose $k$ nearest neighbors define a discrete curvature score via cosine interactions on the unit hypersphere; in the kernel variant, curvature is computed from a normalized local Gram matrix in an RKHS. These scores are aligned and decorrelated across augmentations by a Barlow-style loss on a curvature-derived matrix, encouraging both view invariance and consistency of local manifold bending. Experiments on MNIST and CIFAR-10 datasets with a ResNet-18 backbone show that curvature-regularized SSL yields competitive or improved linear evaluation performance compared to Barlow Twins and VICReg. Our results indicate that explicitly shaping local geometry is a simple and effective complement to purely statistical SSL regularizers.
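The per-point curvature score described in the abstract can be sketched as follows. This is a hedged reconstruction of one plausible reading of "cosine interactions on the unit hypersphere" among each embedding's k nearest neighbors; the function name `knn_curvature_scores` and the mean-off-diagonal aggregation are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def knn_curvature_scores(Z, k=5):
    """Illustrative sketch (not the paper's exact formula): score each
    embedding by the average cosine interaction among the unit edge
    vectors pointing to its k nearest neighbors on the hypersphere."""
    # Project embeddings onto the unit hypersphere.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    # On the sphere, cosine similarity doubles as a nearness measure.
    S = Z @ Z.T
    n = Z.shape[0]
    scores = np.empty(n)
    for i in range(n):
        # k nearest neighbors = k largest similarities, excluding self.
        nbrs = np.argsort(S[i])[::-1][1:k + 1]
        D = Z[nbrs] - Z[i]                     # edge vectors to neighbors
        D = D / np.linalg.norm(D, axis=1, keepdims=True)
        G = D @ D.T                            # cosine interactions among edges
        # Mean off-diagonal cosine: how tightly the neighborhood "bends".
        scores[i] = (G.sum() - k) / (k * (k - 1))
    return scores
```

Under this reading, a flat neighborhood (neighbors spread evenly around the point) yields a low score, while a sharply bent one (neighbors bunched to one side) yields a score near 1; CurvSSL then aligns these scores across the two augmented views.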
Problem

Research questions and friction points this paper is trying to address.

Non-contrastive SSL objectives ignore the local geometry of the data manifold
Augments SSL with a curvature-based regularizer that captures local manifold bending
Shapes local geometry as a complement to purely statistical SSL regularizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

CurvSSL adds a curvature-based regularizer to SSL
Aligns local manifold bending across augmented views via a Barlow-style loss
Computes discrete curvature from k-nearest-neighbor cosine interactions
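The Barlow-style alignment mentioned above can be sketched as a standard Barlow Twins-type objective applied to curvature-derived features from the two views: pull the diagonal of the cross-correlation matrix toward 1 (view invariance) and the off-diagonals toward 0 (redundancy reduction). The normalization and weighting below follow the original Barlow Twins recipe; CurvSSL's exact curvature-derived matrix may differ:

```python
import numpy as np

def barlow_style_loss(A, B, lam=5e-3):
    """Barlow Twins-style objective on two views' feature matrices
    (batch x dim). Diagonal of the cross-correlation is pulled to 1,
    off-diagonals to 0. `lam` trades invariance vs. decorrelation."""
    # Standardize each feature dimension over the batch.
    A = (A - A.mean(axis=0)) / (A.std(axis=0) + 1e-8)
    B = (B - B.mean(axis=0)) / (B.std(axis=0) + 1e-8)
    C = A.T @ B / A.shape[0]                   # cross-correlation matrix
    on_diag = np.sum((np.diag(C) - 1.0) ** 2)  # invariance term
    off_diag = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)  # redundancy term
    return on_diag + lam * off_diag
```

For identical views the invariance term vanishes and only the decorrelation penalty remains, which is why the loss rewards representations that are both view-consistent and non-redundant.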
Benyamin Ghojogh
AI Scientist
Machine Learning · Deep Learning · Theory
M. Hadi Sepanj
Vision and Image Processing Group, Systems Design Engineering, University of Waterloo, Ontario, Canada
Paul Fieguth
Systems Design Engineering
Image Processing · Random Fields · Computer Vision