🤖 AI Summary
This work addresses the "weak-to-strong" (W2S) generalization problem: how weakly supervised models—trained solely on human-level data—can effectively guide stronger language models to solve tasks of superhuman complexity. We propose an iterative training framework based on token-level ensembling: multiple weak experts vote collaboratively at the token level to identify errors made by preceding models, and correction is evaluated on out-of-distribution (OOD) data, with task difficulty serving as the OOD dimension. The method integrates weak-expert ensembling, iterative error analysis, and supervised fine-tuning. Experiments show performance gains of +4.0% and +3.2% on in-distribution (ID) data for weak experts and student models, respectively, and up to +6.0% and +2.28% on OOD data—substantially improving W2S generalization. Our core contribution is the first token-level weak-expert ensembling mechanism, overcoming the limitations of conventional sequence-level supervision.
📝 Abstract
With Large Language Models (LLMs) rapidly approaching and potentially surpassing human-level performance, it has become imperative to develop approaches capable of effectively supervising and enhancing these powerful models using smaller, human-level models exposed to only human-level data. We address this critical weak-to-strong (W2S) generalization challenge by proposing a novel method that improves weak experts trained on the same limited human-level data, enabling them to generalize to complex, superhuman-level tasks. Our approach, called **EnsemW2S**, employs a token-level ensemble strategy that iteratively combines multiple weak experts, systematically addressing the shortcomings identified in preceding iterations. By continuously refining these weak models, we significantly enhance their collective ability to supervise stronger student models. We extensively evaluate the generalization performance of both the ensemble of weak experts and the subsequent strong student model across in-distribution (ID) and out-of-distribution (OOD) datasets. For OOD, we specifically introduce question difficulty as an additional dimension for defining distributional shifts. Our empirical results demonstrate notable improvements of 4% and 3.2% on ID datasets, and up to 6% and 2.28% on OOD datasets, for experts and student models respectively, underscoring the effectiveness of our proposed method in advancing W2S generalization.
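To make the token-level ensemble idea concrete, here is a minimal illustrative sketch (not the paper's implementation): several weak experts each produce a next-token distribution, and the ensemble combines them with a (possibly weighted) average before taking the argmax, so that a majority of experts can outvote a single expert's error. The expert distributions, weights, and vocabulary below are hypothetical.

```python
import numpy as np

def ensemble_next_token(expert_probs, weights=None):
    """Token-level ensemble: average the experts' next-token
    distributions (optionally weighted) and pick the argmax token."""
    probs = np.asarray(expert_probs, dtype=float)  # shape: (n_experts, vocab_size)
    if weights is None:
        # Uniform weighting when no per-expert reliability scores are given.
        weights = np.ones(len(probs)) / len(probs)
    combined = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(combined)), combined

# Three hypothetical weak experts over a toy 4-token vocabulary:
# two experts favour token 2; one (erroneous) expert favours token 0.
experts = [
    [0.1, 0.2, 0.6, 0.1],
    [0.2, 0.1, 0.5, 0.2],
    [0.7, 0.1, 0.1, 0.1],
]
token, dist = ensemble_next_token(experts)  # token-level vote corrects the outlier
```

In this toy case the averaged distribution peaks at token 2 even though one expert strongly prefers token 0, which is the kind of per-token error correction that a sequence-level vote (picking one expert's whole output) cannot provide.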