🤖 AI Summary
This work investigates whether noisy labels generated by a weak teacher model can enable a stronger student model to surpass the teacher's test error and achieve improved error scaling with respect to sample size. Using a two-stage teacher-student training framework modeled via random feature ridge regression, the authors derive a deterministic equivalent for the student's excess test error. The theoretical analysis reveals that, in both bias-dominated and variance-dominated regimes, the student can overcome the teacher's suboptimal scaling law and attain the minimax-optimal rate, even when the teacher's error does not decay with increasing sample size. This study provides the first precise theoretical conditions under which the student provably outperforms the teacher, demonstrating that weak supervision can yield superior, and even optimal, error scaling.
📝 Abstract
It is increasingly common in machine learning to use learned models to label data and then employ such data to train more capable models. The phenomenon of weak-to-strong generalization exemplifies the advantage of this two-stage procedure: a strong student is trained on imperfect labels obtained from a weak teacher, and yet the strong student outperforms the weak teacher. In this paper, we show that the potential improvement is substantial, in the sense that it affects the scaling law followed by the test error. Specifically, we consider students and teachers trained via random feature ridge regression (RFRR). Our main technical contribution is to derive a deterministic equivalent for the excess test error of the student trained on labels obtained via the teacher. Via this deterministic equivalent, we then identify regimes in which the scaling law of the student improves upon that of the teacher, unveiling that the improvement can be achieved both in bias-dominated and variance-dominated settings. Strikingly, the student may attain the minimax optimal rate regardless of the scaling law of the teacher, and in fact even when the test error of the teacher does not decay with the sample size.
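The two-stage procedure described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact setting: the target function, widths, sample sizes, and ridge parameter are all arbitrary choices made for the example. A weak teacher (narrow random feature model, few clean labels) is fit by RFRR, its predictions pseudo-label a larger pool of inputs, and a stronger student (wider model) is then fit by RFRR on those imperfect labels.

```python
# Hypothetical sketch of the weak-to-strong RFRR pipeline; all sizes,
# the target function, and the regularization strength are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 20             # input dimension
n_teacher = 200    # clean-labeled samples for the weak teacher
n_student = 2000   # inputs the teacher will pseudo-label

def target(X):
    # ground-truth function (illustrative choice)
    return np.sin(X @ np.ones(d) / np.sqrt(d))

def rfrr_fit(X, y, width, lam, rng):
    """Random feature ridge regression with ReLU features phi(x) = max(xW, 0)."""
    W = rng.standard_normal((X.shape[1], width)) / np.sqrt(X.shape[1])
    Phi = np.maximum(X @ W, 0.0)
    a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(width), Phi.T @ y)
    return W, a

def rfrr_predict(X, W, a):
    return np.maximum(X @ W, 0.0) @ a

# Stage 1: weak teacher trained on few clean labels with a narrow width.
X_t = rng.standard_normal((n_teacher, d))
y_t = target(X_t) + 0.1 * rng.standard_normal(n_teacher)
W_t, a_t = rfrr_fit(X_t, y_t, width=50, lam=1e-1, rng=rng)

# Stage 2: strong (wider) student trained on many teacher-labeled inputs.
X_s = rng.standard_normal((n_student, d))
y_pseudo = rfrr_predict(X_s, W_t, a_t)     # imperfect labels from the teacher
W_s, a_s = rfrr_fit(X_s, y_pseudo, width=1000, lam=1e-1, rng=rng)

# Compare test errors of teacher and student on fresh data.
X_test = rng.standard_normal((5000, d))
y_test = target(X_test)
err_teacher = np.mean((rfrr_predict(X_test, W_t, a_t) - y_test) ** 2)
err_student = np.mean((rfrr_predict(X_test, W_s, a_s) - y_test) ** 2)
print(f"teacher test error: {err_teacher:.4f}")
print(f"student test error: {err_student:.4f}")
```

The paper's contribution is a deterministic equivalent that characterizes `err_student` precisely in the relevant asymptotic regimes; this simulation only illustrates the experimental setup, and whether the student beats the teacher here depends on the particular sizes chosen.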