🤖 AI Summary
This work addresses the performance saturation commonly observed in large language models during post-training, where overconfidence hinders further optimization. To overcome this limitation, we propose WMSS, a novel approach that leverages the model's historical weak checkpoints as a source of effective supervision. By dynamically identifying recoverable learning gaps through entropy-based signals, WMSS introduces a weak-proxy-guided compensatory learning mechanism that goes beyond the conventional post-training paradigm of solely reinforcing target predictions, all without incurring additional inference overhead. Experimental results demonstrate that WMSS significantly enhances model performance on mathematical reasoning and code generation tasks, effectively mitigating diminishing returns and enabling sustained improvement at no extra computational cost.
📝 Abstract
As post-training optimization becomes central to improving large language models, we observe a persistent saturation bottleneck: once models grow highly confident, further training yields diminishing returns. While existing methods continue to reinforce target predictions, we find that informative supervision signals remain latent in models' own historical weak states. Motivated by this observation, we propose WMSS (Weak Agents Can Make Strong Agents Stronger), a post-training paradigm that leverages weak checkpoints to guide continued optimization. By identifying recoverable learning gaps via entropy dynamics and reinforcing them through compensatory learning, WMSS enables strong agents to improve beyond conventional post-training saturation. Experiments on mathematical reasoning and code generation datasets show that agents trained with our approach achieve effective performance improvements, while incurring zero additional inference cost.
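To make the core idea concrete, the sketch below illustrates one plausible reading of "identifying recoverable learning gaps via entropy dynamics": compare per-token predictive entropy under a weak checkpoint and the current strong model, and flag tokens where entropy has dropped sharply. This is an illustrative assumption, not the paper's actual algorithm; the function names, the entropy-difference criterion, and the `threshold` parameter are all hypothetical.

```python
import numpy as np

def token_entropy(logits: np.ndarray) -> np.ndarray:
    """Shannon entropy of the softmax distribution for each token position.

    logits: array of shape (num_tokens, vocab_size).
    Returns an array of shape (num_tokens,).
    """
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def recoverable_gap_mask(weak_logits: np.ndarray,
                         strong_logits: np.ndarray,
                         threshold: float = 1.0) -> np.ndarray:
    """Flag token positions where the weak checkpoint was uncertain
    (high entropy) but the strong model is now confident (low entropy).

    Such positions are treated here as candidate "recoverable gaps" that
    a compensatory learning objective could up-weight. The threshold on
    the entropy drop is an arbitrary illustrative choice.
    """
    h_weak = token_entropy(weak_logits)
    h_strong = token_entropy(strong_logits)
    return (h_weak - h_strong) > threshold
```

In a training loop, such a mask could re-weight the per-token loss so that optimization concentrates on positions the model has only recently learned to resolve, rather than on targets it already predicts confidently.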