🤖 AI Summary
This study investigates the impact of training order and mixing strategies on cross-domain transfer of reasoning capabilities in multi-domain reinforcement learning. Building upon the Group Relative Policy Optimization (GRPO) framework, we systematically compare sequential versus mixed training across four distinct reasoning domains: mathematics, science, logic, and puzzles. Our findings reveal, for the first time, significant asymmetry, order sensitivity, and strategy dependence in GRPO's multi-domain training dynamics: mathematical tasks benefit from training on other domains, with accuracy gains of up to 25%, whereas logic and puzzle tasks exhibit negligible transfer. Moreover, directional sequences such as math→science substantially outperform their reversed counterparts. These results underscore the necessity of domain-aware and order-sensitive training strategies for optimizing transfer performance in multi-domain reinforcement learning settings.
📝 Abstract
Group Relative Policy Optimization (GRPO) has become a key technique for improving reasoning abilities in large language models, yet its behavior under different domain sequencing strategies is poorly understood. In particular, the impact of sequential (one domain at a time) versus mixed-domain (multiple domains at a time) training in GRPO has not been systematically studied. We provide the first systematic analysis of training-order effects across math, science, logic, and puzzle reasoning tasks. We find that (1) single-domain generalization is highly asymmetric: training on other domains improves math reasoning accuracy by approximately 25\%, while yielding negligible transfer to logic and puzzle tasks; (2) cross-domain interactions are highly order-dependent: training in the order math$\rightarrow$science achieves 83\% / 41\% accuracy on math / science, while reversing the order to science$\rightarrow$math degrades performance to 77\% / 25\%; (3) no single strategy is universally optimal in multi-domain training: sequential training favors math (up to 84\%), mixed training favors science and logic, and poor ordering can incur large performance drops (from 70\% to 56\%). Overall, our findings demonstrate that GRPO under multi-domain settings exhibits pronounced asymmetry, order sensitivity, and strategy dependence, highlighting the necessity of domain-aware and order-aware training design.
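As background for the GRPO framework the abstract builds on, below is a minimal sketch of GRPO's core idea: advantages are computed relative to a group of rollouts for the same prompt rather than from a learned value baseline. The function name, binary correctness rewards, and group size here are illustrative assumptions, not details from the paper.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each rollout's reward against its own group's statistics.

    In GRPO, several completions are sampled per prompt; each completion's
    advantage is its reward minus the group mean, divided by the group
    standard deviation. `eps` guards against zero-variance groups.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Hypothetical example: 4 rollouts for one math prompt, scored 1.0 if the
# final answer is correct and 0.0 otherwise.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Because advantages are zero-mean within each group, correct completions are reinforced and incorrect ones penalized relative to the group, without training a separate critic.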