Compressive Meta-Learning

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address a key limitation of conventional compressive learning, namely that its encoding and decoding procedures are randomized and data-agnostic and therefore ignore the intrinsic structure of the data, this paper introduces meta-learning into the compressive learning framework. The authors propose a learnable compact-representation method that jointly optimizes a nonlinear feature encoder and a parametric decoder, both parameterized by neural networks, enabling efficient and privacy-preserving inference of parameters of the underlying data distribution. The approach is meta-trained end to end on database-level summary representations and achieves substantial improvements in convergence speed and accuracy across diverse downstream tasks, including PCA, ridge regression, k-means clustering, and autoencoding. Empirical evaluations indicate that the proposed framework outperforms existing methods in computational efficiency, scalability, and privacy preservation. By combining theoretical grounding with practical deployability, this work establishes a new paradigm for large-scale learning.

📝 Abstract
The rapid expansion in the size of new datasets has created a need for fast and efficient parameter-learning techniques. Compressive learning is a framework that enables efficient processing by using random, non-linear features to project large-scale databases onto compact, information-preserving representations whose dimensionality is independent of the number of samples and can be easily stored, transferred, and processed. These database-level summaries are then used to decode parameters of interest from the underlying data distribution without requiring access to the original samples, offering an efficient and privacy-friendly learning framework. However, both the encoding and decoding techniques are typically randomized and data-independent, failing to exploit the underlying structure of the data. In this work, we propose a framework that meta-learns both the encoding and decoding stages of compressive learning methods by using neural networks that provide faster and more accurate systems than the current state-of-the-art approaches. To demonstrate the potential of the presented Compressive Meta-Learning framework, we explore multiple applications -- including neural network-based compressive PCA, compressive ridge regression, compressive k-means, and autoencoders.
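The sketching step the abstract describes can be illustrated with a minimal NumPy example. This is a hypothetical sketch using random Fourier features, one common choice of random nonlinear encoder in compressive learning; the frequency matrix `Omega` and all dimensions here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch(X, Omega):
    """Compress a whole dataset into one fixed-size summary vector.

    X: (n, d) data matrix; Omega: (d, m) random frequencies.
    The cos/sin features are averaged over samples, so the result
    is a (2m,) vector whose size does not depend on n.
    """
    proj = X @ Omega                                              # (n, m)
    feats = np.concatenate([np.cos(proj), np.sin(proj)], axis=1)  # (n, 2m)
    return feats.mean(axis=0)                                     # (2m,)

d, m = 5, 64
Omega = rng.normal(size=(d, m))            # random, data-agnostic encoder
z_small = sketch(rng.normal(size=(100, d)), Omega)
z_big = sketch(rng.normal(size=(10_000, d)), Omega)
print(z_small.shape, z_big.shape)          # both sketches are (128,)
```

Both a 100-sample and a 10,000-sample dataset compress to the same 128-dimensional summary, which is what makes the representation cheap to store, transfer, and process.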
Problem

Research questions and friction points this paper is trying to address.

Efficient parameter learning for rapidly growing datasets
Randomized, data-independent encoding and decoding that ignore data structure
Need for faster and more accurate compressive learning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learns both the encoding and decoding stages
Parameterizes encoder and decoder with neural networks
Applies to compressive PCA, ridge regression, k-means, and autoencoders
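The meta-learning idea can be sketched in miniature: generate many synthetic datasets with known parameters, sketch each one, and fit a parametric decoder that maps sketches back to those parameters. This is not the paper's architecture; here a closed-form ridge decoder stands in for the neural decoder, the encoder is left random rather than learned end to end, and the task (recovering the dataset mean) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n_samples = 3, 64, 500
Omega = rng.normal(size=(d, m))    # encoder kept random in this sketch;
                                   # the paper meta-learns it as well

def sketch(X):
    P = X @ Omega
    return np.concatenate([np.cos(P), np.sin(P)], axis=1).mean(axis=0)

# Meta-training set: many synthetic datasets ("tasks") with known means.
Z, Y = [], []
for _ in range(500):
    mu = rng.uniform(-1, 1, size=d)
    X = mu + 0.3 * rng.normal(size=(n_samples, d))
    Z.append(sketch(X))
    Y.append(mu)
Z, Y = np.asarray(Z), np.asarray(Y)

# Parametric decoder: ridge regression from sketch to mean, fit in
# closed form; the paper trains a neural decoder end to end instead.
lam = 1e-2
W = np.linalg.solve(Z.T @ Z + lam * np.eye(2 * m), Z.T @ Y)

# Decode the mean of an unseen dataset from its 128-dim sketch alone,
# without ever touching the new dataset's raw samples at decode time.
mu_true = rng.uniform(-1, 1, size=d)
X_new = mu_true + 0.3 * rng.normal(size=(n_samples, d))
mu_hat = sketch(X_new) @ W
print(mu_hat.shape)                # (3,)
```

The decoder is trained once across tasks and then reused on new sketches, which mirrors the privacy-friendly property emphasized in the abstract: only the summary, never the raw samples, is needed at inference time.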