🤖 AI Summary
This paper identifies a pervasive gradient magnitude imbalance in multi-task reinforcement learning (RL) post-training of large language models (LLMs): gradients from different tasks exhibit significantly divergent norms, systematically biasing optimization toward high-gradient tasks. This bias occurs despite empirical evidence of no positive correlation between gradient magnitude and actual learning gain (i.e., performance improvement), and standard training signals such as rewards or advantage estimates fail to explain it. The root cause lies in intrinsic task heterogeneity.
Method: Rather than relying on heuristic fixes based on data mixing ratios or reward weighting, the paper rigorously characterizes the phenomenon through systematic gradient distribution analysis and ablation studies, and argues for a principled gradient-level correction paradigm.
Contribution/Results: The robustness of the gradient imbalance is validated across diverse task combinations and RL post-training configurations. These findings offer theoretical insight into multi-task LLM alignment and establish gradient-aware optimization as a scalable direction for future work.
📄 Abstract
Multi-task post-training of large language models (LLMs) is typically performed by mixing datasets from different tasks and optimizing them jointly. This approach implicitly assumes that all tasks contribute gradients of similar magnitudes. In this paper, we show that this assumption fails in RL post-training: certain tasks produce significantly larger gradients, thus biasing updates toward those tasks. Such gradient imbalance would be justified only if larger gradients implied larger learning gains on the corresponding tasks (i.e., larger performance improvements) -- but we find this is not true. Large-gradient tasks can achieve similar or even much lower learning gains than small-gradient ones. Further analyses reveal that these gradient imbalances cannot be explained by typical training statistics such as training rewards or advantages, suggesting that they arise from inherent differences between tasks. This cautions against naive dataset mixing and calls for future work on principled gradient-level corrections for LLMs.
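The imbalance described above can be made concrete with a minimal sketch. The code below simulates two tasks whose gradient norms differ by two orders of magnitude, shows that naive averaging is dominated by the large-gradient task, and applies a simple per-task norm rescaling. Note that this rescaling is an illustrative example of a gradient-level correction, not the paper's method (the paper leaves principled corrections to future work); the task names and the 100x scale factor are assumptions chosen for the toy example.

```python
import numpy as np

def per_task_grad_norms(task_grads):
    """L2 norm of each task's aggregated gradient vector."""
    return {t: float(np.linalg.norm(g)) for t, g in task_grads.items()}

def norm_balanced_mix(task_grads, eps=1e-8):
    """Illustrative correction (not the paper's method): rescale each
    task's gradient to unit norm before averaging, so no single task
    dominates the joint update by magnitude alone."""
    scaled = [g / (np.linalg.norm(g) + eps) for g in task_grads.values()]
    return np.mean(scaled, axis=0)

def cosine(a, b):
    """Cosine similarity between two gradient directions."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two toy tasks whose gradients differ in magnitude by roughly 100x.
rng = np.random.default_rng(0)
grads = {
    "math": 100.0 * rng.normal(size=8),  # large-gradient task
    "code": 1.0 * rng.normal(size=8),    # small-gradient task
}

naive = np.mean(list(grads.values()), axis=0)  # dominated by "math"
balanced = norm_balanced_mix(grads)            # each task contributes equally
```

Here `cosine(naive, grads["math"])` is close to 1: the naive mixed update points almost exactly along the large-gradient task's direction, which is the optimization bias the paper documents.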