🤖 AI Summary
In teacher-student paradigms, weight inversion fails when model parameters vastly exceed the number of training samples: the student network overfits the query inputs rather than aligning its parameters with the teacher's. To address this, we propose a hidden-layer representation-aware data augmentation strategy. Unlike conventional augmentations (e.g., rotation, flipping, or noise injection), our method actively elicits diverse internal representations from the teacher network, thereby improving the student's functional approximation of, and parametric alignment with, the teacher. Experiments demonstrate successful reconstruction of teacher weights in networks whose parameter count is up to 100 times the number of training data points, surpassing prior scalability limits in weight inversion and establishing the feasibility of input-output query-driven weight recovery at this scale.
📝 Abstract
Network weights can be reverse-engineered given enough informative samples of a network's input-output function. In a teacher-student setup, this translates into collecting a dataset of the teacher mapping -- querying the teacher -- and fitting a student to imitate that mapping. A sensible choice of queries is the dataset the teacher was trained on. But current methods fail when the teacher's parameters outnumber the training data, because the student overfits to the queries instead of aligning its parameters to the teacher's. In this work, we explore augmentation techniques to best sample the input-output mapping of a teacher network, with the goal of eliciting a rich set of representations from the teacher's hidden layers. We find that standard augmentations, such as rotation, flipping, and adding noise, bring little to no improvement to the identification problem. We therefore design new data augmentation techniques tailored to better sample the representational space of the network's hidden layers. With our augmentations we extend the state-of-the-art range of recoverable network sizes: we show that we can recover networks with up to 100 times more parameters than training data points.
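The query-selection idea can be made concrete with a minimal numpy sketch. This is not the paper's actual algorithm: the one-hidden-layer ReLU teacher, the noise scale, and the acceptance rule (keep a perturbed query only if it triggers a hidden-layer activation pattern not yet seen) are all illustrative assumptions. It only shows the general principle of augmenting queries so that they elicit diverse hidden representations from the teacher, rather than augmenting for label invariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny teacher: one hidden ReLU layer.
# W1 and w2 are the weights an inversion attack would try to recover.
d_in, d_hid = 5, 8
W1 = rng.normal(size=(d_hid, d_in))
w2 = rng.normal(size=d_hid)

def teacher(x):
    """Teacher's input-output mapping (what queries observe)."""
    return np.maximum(W1 @ x, 0.0) @ w2

def hidden_pattern(x):
    """Which hidden ReLUs fire for input x (the internal representation probed)."""
    return tuple((W1 @ x > 0).astype(int))

# Representation-aware augmentation (illustrative acceptance rule):
# perturb each base point and keep only candidates whose hidden activation
# pattern is new, so the query set probes many linear regions of the teacher.
base = rng.normal(size=(3, d_in))  # stand-in for a small training set
seen, queries = set(), []
for x0 in base:
    for _ in range(200):
        x = x0 + 0.5 * rng.normal(size=d_in)  # candidate augmented query
        p = hidden_pattern(x)
        if p not in seen:
            seen.add(p)
            queries.append(x)

print(f"{len(queries)} queries covering {len(seen)} distinct activation patterns")
```

A plain rotation or noise augmentation would keep every candidate regardless of what it does inside the teacher; the filter above is what makes the augmentation "representation-aware" in this toy setting.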