Interpretable Kernel Representation Learning at Scale: A Unified Framework Utilizing Nyström Approximation

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Kernel methods offer strong theoretical guarantees but suffer from prohibitive time and memory complexity, severely limiting their applicability to large-scale representation learning, particularly in unsupervised and self-supervised settings, where scalable kernel-based frameworks remain absent. To address this, we propose KREPES: the first unified, scalable kernel representation learning framework. Built upon the Nyström approximation and large-scale optimization techniques, KREPES supports diverse unsupervised and self-supervised loss functions while preserving the theoretical interpretability inherent to kernel methods. Crucially, the Nyström approximation keeps time and memory costs well below those of exact kernel computation. Extensive experiments demonstrate that KREPES enables efficient training on large-scale image and tabular datasets, and yields representations that are not only competitive in performance but also significantly more interpretable than those produced by deep neural networks. KREPES thus establishes a novel paradigm for large-scale nonparametric representation learning.

📝 Abstract
Kernel methods provide a theoretically grounded framework for non-linear and non-parametric learning, with strong analytic foundations and statistical guarantees. Yet, their scalability has long been limited by prohibitive time and memory costs. While progress has been made in scaling kernel regression, no framework exists for scalable kernel-based representation learning, restricting their use in the era of foundation models where representations are learned from massive unlabeled data. We introduce KREPES -- a unified, scalable framework for kernel-based representation learning via Nyström approximation. KREPES accommodates a wide range of unsupervised and self-supervised losses, and experiments on large image and tabular datasets demonstrate its efficiency. Crucially, KREPES enables principled interpretability of the learned representations, an immediate benefit over deep models, which we substantiate through dedicated analysis.
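The Nyström construction at the core of KREPES can be sketched compactly. The snippet below is a minimal, hypothetical illustration of how explicit Nyström features are built; the RBF kernel, the uniform landmark sampling, and all function names are our assumptions for illustration, not the paper's actual API or recipe.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def nystrom_features(X, n_landmarks=200, gamma=1.0, seed=0):
    """Map X (n x d) to explicit n x m features Phi with K ~ Phi @ Phi.T,
    via K ~ C W^{-1} C^T, where C = K(X, L) and W = K(L, L) for landmarks L."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(len(X), size=n_landmarks, replace=False)]
    W = rbf_kernel(landmarks, landmarks, gamma)   # m x m landmark kernel
    C = rbf_kernel(X, landmarks, gamma)           # n x m cross kernel
    # W^{-1/2} via eigendecomposition, clipping tiny eigenvalues for stability.
    vals, vecs = np.linalg.eigh(W)
    W_inv_sqrt = vecs @ np.diag(np.clip(vals, 1e-10, None) ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                         # Phi: n x m

X = np.random.randn(5000, 32)
Phi = nystrom_features(X)
print(Phi.shape)  # (5000, 200); Phi @ Phi.T approximates the 5000 x 5000 kernel
```

With m landmarks and m much smaller than n, the cost drops from O(n^2) kernel evaluations to O(nm), which is what makes large-scale training feasible; and because each feature dimension is a (whitened) similarity to one concrete landmark example, the representation can be inspected directly, which is the flavor of interpretability the abstract highlights.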
Problem

Research questions and friction points this paper is trying to address.

Scaling kernel methods for representation learning
Overcoming computational limitations of kernel approaches
Providing interpretable representations in the era of foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified scalable framework for kernel representation learning
Uses Nyström approximation to enable efficient learning
Provides interpretable representations for unsupervised and self-supervised tasks (see the illustrative loss sketch below)
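To make the "diverse losses" point concrete, here is one hedged example of a self-supervised objective that could be evaluated on such features: a Barlow Twins-style redundancy-reduction loss. Its pairing with Nyström features is our illustrative assumption, not the paper's exact recipe; the stand-in arrays below would be Nyström embeddings of two augmented views in an actual pipeline.

```python
import numpy as np

def barlow_twins_loss(Z1, Z2, lam=5e-3):
    """Redundancy-reduction loss between two batches of embeddings (n x m):
    pushes the cross-correlation matrix of the two views toward identity."""
    n = Z1.shape[0]
    Z1 = (Z1 - Z1.mean(axis=0)) / (Z1.std(axis=0) + 1e-8)
    Z2 = (Z2 - Z2.mean(axis=0)) / (Z2.std(axis=0) + 1e-8)
    C = (Z1.T @ Z2) / n                                # m x m cross-correlation
    on_diag = np.sum((np.diag(C) - 1.0) ** 2)          # diagonal -> 1 (invariance)
    off_diag = np.sum(C**2) - np.sum(np.diag(C) ** 2)  # off-diagonal -> 0
    return on_diag + lam * off_diag

# Stand-ins for Nyström features of two augmented views of the same batch.
Z1 = np.random.randn(512, 200)
Z2 = Z1 + 0.05 * np.random.randn(512, 200)
print(barlow_twins_loss(Z1, Z2))
```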
Maedeh Zarvandi
School of Computation, Information and Technology, Technical University of Munich, 85748 Garching bei München, Germany
Michael Timothy
School of Computation, Information and Technology, Technical University of Munich, 85748 Garching bei München, Germany
Theresa Wasserer
School of Computation, Information and Technology, Technical University of Munich, 85748 Garching bei München, Germany
Debarghya Ghoshdastidar
Technical University of Munich
Machine learning · Statistics · Network analysis