AI Summary
To address the training instability and limited architectural flexibility of self-supervised Joint-Embedding Predictive Architectures (JEPAs) regularized in Euclidean space, this paper proposes a kernel-based regularization framework. Our method formulates latent-space distribution matching in JEPA as a kernelized maximum mean discrepancy (MMD), thereby establishing the first scalable family of kernelized JEPAs. We derive the closed-form asymptotic limit of the high-dimensional sliced MMD, enabling rigorous theoretical analysis for arbitrary kernels combined with isotropic Gaussian priors. Compared with the Epps–Pulley regularization used in LeJEPA, our approach improves training stability and downstream generalization, and we provide theoretical analysis of its convergence behavior and statistical consistency. The framework preserves JEPA's predictive structure while enhancing robustness and expressivity through principled kernel embeddings, offering a unified foundation for stable, theoretically grounded representation learning.
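For reference, the quantities named above can be written in their standard textbook form: the squared MMD between the embedding distribution P and a prior Q under a kernel k, and its sliced variant, which averages one-dimensional MMDs over random projection directions. This is only a sketch of the standard definitions; the paper's exact sliced formulation (slicing distribution, normalization, estimator) may differ.

```latex
% Squared MMD between embedding distribution P and prior Q for a kernel k,
% and the sliced variant averaging over directions on the unit sphere.
\mathrm{MMD}_k^2(P, Q)
  = \mathbb{E}_{x, x' \sim P}\,[k(x, x')]
  + \mathbb{E}_{y, y' \sim Q}\,[k(y, y')]
  - 2\,\mathbb{E}_{x \sim P,\; y \sim Q}\,[k(x, y)],
\qquad
\mathrm{SMMD}_k^2(P, Q)
  = \mathbb{E}_{\theta \sim \mathrm{Unif}(\mathbb{S}^{d-1})}
    \!\left[\mathrm{MMD}_k^2\!\left(\theta^{\top} P,\; \theta^{\top} Q\right)\right],
```

where \(\theta^{\top} P\) denotes the pushforward of P under the projection \(x \mapsto \theta^{\top} x\).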
Abstract
Recent breakthroughs in self-supervised Joint-Embedding Predictive Architectures (JEPAs) have established that regularizing Euclidean representations toward isotropic Gaussian priors yields provable gains in training stability and downstream generalization. We introduce a new, flexible family of KerJEPAs: self-supervised learning algorithms with kernel-based regularizers. One instance of this family corresponds to the recently introduced LeJEPA Epps–Pulley regularizer, which approximates a sliced maximum mean discrepancy (MMD) with a Gaussian prior and Gaussian kernel. By expanding the class of viable kernels and priors and computing the closed-form high-dimensional limit of sliced MMDs, we develop alternative KerJEPAs with a number of favorable properties, including improved training stability and design flexibility.
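To make the sliced-MMD regularization concrete, here is a minimal PyTorch sketch of a Monte-Carlo sliced MMD with a Gaussian (RBF) kernel against an isotropic Gaussian prior. All names, defaults (number of slices, bandwidth), and the biased estimator are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: sliced MMD^2 regularizer with an RBF kernel against an
# isotropic Gaussian prior. Illustrative only; not the paper's implementation.
import torch


def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian kernel k(a, b) = exp(-(a - b)^2 / (2 * bandwidth^2)) on 1-D inputs."""
    diff = x[:, None] - y[None, :]
    return torch.exp(-diff.pow(2) / (2 * bandwidth ** 2))


def sliced_mmd_gaussian_prior(z, num_slices=64, bandwidth=1.0):
    """
    Monte-Carlo sliced MMD^2 between embeddings z of shape (n, d) and an
    isotropic Gaussian prior N(0, I_d): project both onto random unit
    directions and average the 1-D (biased) MMD^2 estimates over slices.
    """
    n, d = z.shape
    # Random unit directions on the sphere S^{d-1}.
    dirs = torch.randn(d, num_slices, device=z.device)
    dirs = dirs / dirs.norm(dim=0, keepdim=True)
    z_proj = z @ dirs                                   # (n, num_slices)
    # Projections of N(0, I_d) onto unit vectors are N(0, 1).
    prior_proj = torch.randn(n, num_slices, device=z.device)
    total = z.new_zeros(())
    for s in range(num_slices):
        x, y = z_proj[:, s], prior_proj[:, s]
        k_xx = rbf_kernel(x, x, bandwidth).mean()
        k_yy = rbf_kernel(y, y, bandwidth).mean()
        k_xy = rbf_kernel(x, y, bandwidth).mean()
        total = total + (k_xx + k_yy - 2 * k_xy)
    return total / num_slices


# Usage sketch: add the regularizer to a JEPA-style predictive loss, e.g.
# loss = prediction_loss + lambda_reg * sliced_mmd_gaussian_prior(embeddings)
```

Swapping `rbf_kernel` for another one-dimensional kernel is what changing the kernel within this family would amount to in such a sketch; the closed-form high-dimensional limit discussed in the paper would replace the Monte-Carlo sampling over the prior.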