Weak-Driven Learning: How Weak Agents Make Strong Agents Stronger

📅 2026-02-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance saturation commonly observed in large language models during post-training, where overconfidence hinders further optimization. To overcome this limitation, we propose WMSS, a novel approach that leverages the model's own historical weak checkpoints as a source of effective supervision. By dynamically identifying recoverable learning gaps through entropy-based signals, WMSS introduces a weak-proxy-guided compensatory learning mechanism that goes beyond the conventional post-training paradigm of reinforcing only target predictions, all without incurring additional inference overhead. Experimental results demonstrate that WMSS significantly enhances model performance on mathematical reasoning and code generation tasks, effectively mitigating diminishing returns and enabling sustained improvement at no extra computational cost.

๐Ÿ“ Abstract
As post-training optimization becomes central to improving large language models, we observe a persistent saturation bottleneck: once models grow highly confident, further training yields diminishing returns. While existing methods continue to reinforce target predictions, we find that informative supervision signals remain latent in models' own historical weak states. Motivated by this observation, we propose WMSS (Weak Agents Can Make Strong Agents Stronger), a post-training paradigm that leverages weak checkpoints to guide continued optimization. By identifying recoverable learning gaps via entropy dynamics and reinforcing them through compensatory learning, WMSS enables strong agents to improve beyond conventional post-training saturation. Experiments on mathematical reasoning and code generation datasets show that agents trained with our approach achieve effective performance improvements, while incurring zero additional inference cost.
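The abstract describes two steps: flagging positions where the strong model is overconfident but a weak checkpoint still carried uncertainty, and adding a compensatory term at those positions. The paper's exact formulation is not given on this page, so the following is only a minimal sketch of that idea; the function names (`recoverable_gap_mask`, `compensatory_loss`), the entropy thresholds, and the weighting scheme are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability distribution."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def recoverable_gap_mask(strong_probs, weak_probs, targets,
                         strong_entropy_max=0.5, weak_entropy_min=1.0):
    """Flag positions where the strong model is already overconfident
    (low entropy) while its weak checkpoint was still uncertain (high
    entropy). Thresholds here are illustrative, not from the paper."""
    mask = []
    for ps, pw, _t in zip(strong_probs, weak_probs, targets):
        overconfident = entropy(ps) < strong_entropy_max
        weak_uncertain = entropy(pw) > weak_entropy_min
        mask.append(overconfident and weak_uncertain)
    return mask

def compensatory_loss(strong_probs, weak_probs, targets, alpha=0.1):
    """Token-level cross-entropy, up-weighted by (1 + alpha) on flagged
    positions so optimization keeps pressure on the gaps the weak
    checkpoint exposes, instead of only reinforcing confident targets."""
    mask = recoverable_gap_mask(strong_probs, weak_probs, targets)
    losses = []
    for ps, t, flagged in zip(strong_probs, targets, mask):
        ce = -np.log(max(ps[t], 1e-12))
        losses.append(ce * (1.0 + alpha) if flagged else ce)
    return float(np.mean(losses))
```

Because the weak checkpoint is only consulted during training, a scheme of this shape would add nothing at inference time, consistent with the "zero additional inference cost" claim.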
Problem

Research questions and friction points this paper is trying to address.

post-training optimization
performance saturation
large language models
diminishing returns
learning bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weak-Driven Learning
Post-Training Optimization
Entropy Dynamics
Compensatory Learning
Model Checkpoints
Zehao Chen
PhD, Yale University
Porous Media, Fluid Dynamics, Polymer, Hydrogel
Gongxun Li
Beihang University
Tianxiang Ai
China Telecom eSurfing Cloud
Yifei Li
Beihang University
Zixuan Huang
Beihang University
Wang Zhou
Sun Yat-Sen University
Fuzhen Zhuang
Beihang University
Xianglong Liu
Beihang University
Jianxin Li
School of Computer Science & Engineering, Beihang University
Big Data, AI, Intelligent Computing
Deqing Wang
Beihang University
Yikun Ban
Beihang University, University of Illinois Urbana-Champaign
Reinforcement Learning, Ensemble Learning