A Law of Data Reconstruction for Random Features (and Beyond)

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work re-examines the memorization phenomenon in deep models from a data reconstruction perspective: when the number of parameters $p$ exceeds the product of the input dimension $d$ and the training sample size $n$ (i.e., $p > dn$), can the original training data be losslessly reconstructed from the learned model parameters? To address this, we propose a general reconstruction framework grounded in random feature analysis, which models the linear mapping between feature space and input space and employs optimization-based inversion for efficient recovery. Through rigorous theoretical analysis and extensive experiments, we establish, for the first time, $p \gg dn$ as a universal threshold for exact data reconstructability. We validate this threshold across random feature models, fully connected networks, and deep residual networks, demonstrating both its theoretical soundness and empirical robustness. Our findings uncover a novel implicit data storage mechanism in large-scale models, wherein overparameterization enables faithful encoding of training inputs within model weights.

📝 Abstract
Large-scale deep learning models are known to memorize parts of the training set. In machine learning theory, memorization is often framed as interpolation or label fitting, and classical results show that this can be achieved when the number of parameters $p$ in the model is larger than the number of training samples $n$. In this work, we consider memorization from the perspective of data reconstruction, demonstrating that this can be achieved when $p$ is larger than $dn$, where $d$ is the dimensionality of the data. More specifically, we show that, in the random features model, when $p \gg dn$, the subspace spanned by the training samples in feature space gives sufficient information to identify the individual samples in input space. Our analysis suggests an optimization method to reconstruct the dataset from the model parameters, and we demonstrate that this method performs well on various architectures (random features, two-layer fully-connected and deep residual networks). Our results reveal a law of data reconstruction, according to which the entire training dataset can be recovered as $p$ exceeds the threshold $dn$.
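The optimization-based inversion mentioned in the abstract can be illustrated with a minimal single-sample sketch, not the paper's actual method: given the frozen random first-layer weights of a ReLU random features model and the feature representation of one training point, gradient descent on a candidate input recovers the point when $p \gg d$. All variable names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy inversion of a ReLU random features map for a single sample
# (illustrative only; the paper recovers the whole dataset from the
# trained model parameters, not from per-sample features).
rng = np.random.default_rng(0)
d, p = 8, 400                          # input dimension, number of random features
W = rng.standard_normal((p, d))        # frozen random first-layer weights
x_true = rng.standard_normal(d)        # the training point we try to recover
target = np.maximum(W @ x_true, 0.0)   # its ReLU feature representation

# Gradient descent on a candidate input to match the target features.
x_hat = 0.1 * rng.standard_normal(d)   # small random init (zero init has zero gradient)
lr = 1.0 / p
for _ in range(3000):
    pre = W @ x_hat
    resid = np.maximum(pre, 0.0) - target
    # Gradient of 0.5 * ||relu(W x) - target||^2 with respect to x.
    x_hat -= lr * (W.T @ (resid * (pre > 0)))

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because $p \gg d$ here, the ReLU feature map is generically injective and, in this toy setting, the descent typically converges to the original input; with $p$ close to $d$ the same procedure tends to fail, a one-sample analogue of the $p$ versus $dn$ threshold.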
Problem

Research questions and friction points this paper is trying to address.

Reconstructing training data from model parameters
Establishing threshold for data memorization in models
Analyzing reconstruction feasibility across neural architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data reconstruction possible when parameters exceed dn
Training samples subspace identifies input space data
Optimization method recovers dataset from model parameters
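The "linear mapping between feature space and input space" idea can be sketched in a toy linear-activation setting (an illustrative simplification; the paper treats nonlinear random features and recovery from trained parameters). With a linear activation the feature map is just the matrix $W$, so all $n$ samples are recovered at once by pseudoinverting it:

```python
import numpy as np

# Toy illustration: a linear feature map W sends inputs to feature space,
# and its left-inverse recovers every training sample exactly.
rng = np.random.default_rng(1)
d, n, p = 5, 20, 200               # note p = 200 > d*n = 100
W = rng.standard_normal((p, d))    # random feature weights
X = rng.standard_normal((d, n))    # n training samples as columns
Phi = W @ X                        # their feature-space representations

X_hat = np.linalg.pinv(W) @ Phi    # pinv(W) @ W = I_d since W has full column rank
```

Exact recovery in this sketch only needs $p \ge d$; loosely speaking, the paper's stronger $p \gg dn$ condition arises because all $dn$ input entries must be pinned down from the model parameters alone, rather than from per-sample features.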
🔎 Similar Papers
No similar papers found.