Eigenfunction Extraction for Ordered Representation Learning

📅 2025-10-28
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing representation learning methods, both contrastive and non-contrastive, recover only the linear span of the top eigenfunctions of the contextual kernel rather than an exact, ordered spectral decomposition; this limitation hinders faithful modeling of feature importance and adaptive dimension selection. To address it, we propose the first general-purpose framework that explicitly computes identifiable, spectrally ordered eigenfunctions of the contextual kernel. Our approach unifies low-rank approximation and Rayleigh quotient optimization into modular, kernel-compatible, and scalable solver components. We validate the framework on synthetic kernels and real-world image data: the estimated eigenvalues serve as robust, interpretable feature scores, enabling efficient and principled feature selection. The method achieves a favorable trade-off between accuracy and computational efficiency, offering a novel paradigm for explainable, adaptive representation learning at scale.

📝 Abstract
Recent advances in representation learning reveal that widely used objectives, both contrastive and non-contrastive, implicitly perform spectral decomposition of a contextual kernel induced by the relationship between inputs and their contexts. Yet these methods recover only the linear span of the top eigenfunctions of the kernel, whereas exact spectral decomposition is essential for understanding feature ordering and importance. In this work, we propose a general framework to extract ordered and identifiable eigenfunctions, based on modular building blocks designed to satisfy key desiderata, including compatibility with the contextual kernel and scalability to modern settings. We then show how two main methodological paradigms, low-rank approximation and Rayleigh quotient optimization, align with this framework for eigenfunction extraction. Finally, we validate our approach on synthetic kernels and demonstrate on real-world image datasets that the recovered eigenvalues act as effective importance scores for feature selection, enabling principled efficiency-accuracy tradeoffs via adaptive-dimensional representations.
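For a concrete point of reference, the "exact spectral decomposition" the abstract contrasts against can be computed directly on a finite sample: eigendecomposing the kernel Gram matrix yields spectrally ordered empirical eigenfunctions. The sketch below is an illustration only, using plain NumPy and an RBF kernel as a stand-in for the contextual kernel; it is the classical baseline, not the paper's scalable solver.

```python
import numpy as np

# Illustration only: on a finite sample, an ordered spectral decomposition
# can be read off from the eigendecomposition of the kernel Gram matrix.
# The paper proposes a scalable, modular route to the same ordered spectrum.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

# RBF (Gaussian) kernel Gram matrix -- a stand-in for the contextual kernel.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / 2.0)

# eigh returns eigenvalues in ascending order; reverse both factors so the
# spectrum runs from largest to smallest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Empirical eigenfunctions evaluated at the sample points, scaled to unit
# empirical (1/n-weighted) norm.
phi = eigvecs * np.sqrt(len(X))
```

The ordering makes the representation identifiable up to signs (and rotations within eigenvalue ties), which is the property a method recovering only the linear span of the top eigenfunctions lacks.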
Problem

Research questions and friction points this paper is trying to address.

Extracting ordered eigenfunctions from contextual kernels
Overcoming limitations of existing spectral decomposition methods
Enabling principled feature selection via eigenvalue importance scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts ordered identifiable eigenfunctions via modular framework
Aligns low-rank approximation with Rayleigh quotient optimization
Uses eigenvalues as importance scores for feature selection
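The last bullet suggests a simple adaptive-dimension rule: keep the smallest number of eigenfunctions whose eigenvalue mass reaches a target fraction. A hedged sketch of that idea follows; the helper name `adaptive_dim` and the 0.95 default are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hedged sketch of "eigenvalues as importance scores": pick the smallest
# representation dimension whose spectral mass reaches a target fraction.
# `adaptive_dim` and the 0.95 default are illustrative, not from the paper.
def adaptive_dim(eigvals, mass=0.95):
    """Smallest k such that the top-k eigenvalues hold `mass` of the total."""
    vals = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    cum = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(cum, mass) + 1)

# A fast-decaying spectrum: a handful of dimensions carries most of the
# mass, so the efficiency-accuracy trade-off favors a small representation.
spectrum = 2.0 ** -np.arange(10)   # 1, 1/2, 1/4, ...
k = adaptive_dim(spectrum, mass=0.95)
```

Because the eigenvalues are ordered, the same scores support per-input or per-dataset truncation without retraining, which is what "adaptive-dimensional representations" refers to in the abstract.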
Burak Varici
Machine Learning Department, Carnegie Mellon University
Che-Ping Tsai
Machine Learning Department, Carnegie Mellon University
Ritabrata Ray
Machine Learning Department, Carnegie Mellon University
Nicholas M. Boffi
Carnegie Mellon University
Machine Learning, Applied Mathematics, Artificial Intelligence
Pradeep Ravikumar
Professor, School of Computer Science, Carnegie Mellon University
Machine Learning, Statistical Machine Learning