Advancing General-Purpose Reasoning Models with Modular Gradient Surgery

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation commonly observed when training general-purpose large reasoning models with multi-domain reinforcement learning, a degradation that stems from cross-domain gradient conflicts. The study presents Modular Gradient Surgery (MGS), a novel approach that identifies and mitigates gradient interference at the Transformer module level through module-wise gradient modulation, thereby enhancing cross-domain generalization. Experiments on Llama- and Qwen-based architectures demonstrate that MGS consistently outperforms standard multi-task reinforcement learning, yielding average improvements of 4.3 points (16.6%) on Llama models and 4.5 points (11.1%) on Qwen models across mathematical reasoning, general dialogue, and instruction-following tasks. Moreover, the gains achieved by MGS remain stable throughout extended training regimes.
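
Neither the summary nor this page links any code, so the following is a minimal sketch of what module-wise gradient surgery could look like, assuming a PCGrad-style projection (remove the conflicting component when two domains' gradients disagree) applied independently within each Transformer module rather than over the whole parameter vector. The module grouping rule, the two-domain setting, and the names `per_module_grads` and `modular_surgery` are illustrative assumptions, not the paper's released implementation.

```python
import torch

def per_module_grads(model, loss):
    """One flattened gradient vector per submodule, for one domain's loss.

    Parameters sharing a module prefix (e.g. 'layers.0.attn') are
    concatenated so that surgery can act at the module level, not globally.
    """
    named = list(model.named_parameters())
    grads = torch.autograd.grad(loss, [p for _, p in named],
                                retain_graph=True, allow_unused=True)
    by_module = {}
    for (name, _), g in zip(named, grads):
        if g is None:
            continue
        key = name.rsplit(".", 1)[0]   # strip trailing '.weight' / '.bias'
        by_module.setdefault(key, []).append(g.flatten())
    return {k: torch.cat(v) for k, v in by_module.items()}

def modular_surgery(grads_a, grads_b):
    """Merge two domains' gradients, resolving conflicts per module.

    Where a module's gradients point in opposing directions (negative
    dot product), each is projected onto the normal plane of the other
    before summing: the PCGrad rule, applied module by module.
    """
    merged = {}
    for key in grads_a.keys() & grads_b.keys():
        a, b = grads_a[key], grads_b[key]
        dot = torch.dot(a, b)
        if dot < 0:                  # conflict detected in this module
            merged[key] = (a - dot / b.dot(b) * b) + (b - dot / a.dot(a) * a)
        else:                        # aligned gradients: plain sum
            merged[key] = a + b
    return merged
```

In a full training step the merged vectors would be split back into each parameter's `.grad` buffer before the optimizer update; the point of the sketch is only that the conflict test and projection run once per module instead of once over the concatenated model gradient.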

📝 Abstract
Reinforcement learning (RL) has played a central role in recent advances in large reasoning models (LRMs), yielding strong gains in verifiable and open-ended reasoning. However, training a single general-purpose LRM across diverse domains remains challenging due to pronounced domain heterogeneity. Through a systematic study of two widely used strategies, Sequential RL and Mixed RL, we find that both incur substantial cross-domain interference at the behavioral and gradient levels, resulting in limited overall gains. To address these challenges, we introduce **M**odular **G**radient **S**urgery (**MGS**), which resolves gradient conflicts at the module level within the transformer. When applied to Llama and Qwen models, MGS achieves average improvements of 4.3 (16.6%) and 4.5 (11.1%) points, respectively, over standard multi-task RL across three representative domains (math, general chat, and instruction following). Further analysis demonstrates that MGS remains effective under prolonged training. Overall, our study clarifies the sources of interference in multi-domain RL and presents an effective solution for training general-purpose LRMs.
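
The abstract traces the limited gains of Sequential and Mixed RL to interference at the gradient level. Under the same illustrative module grouping as the sketch above, a quick way to locate that interference is to score per-module cosine similarity between two domains' gradients; the helper name `conflict_report` and the domain labels are again assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def conflict_report(grads_math, grads_chat):
    """Per-module cosine similarity between two domains' gradients.

    Values near -1 mark modules where the domains pull in opposite
    directions, i.e. the conflicts that module-level surgery targets.
    """
    report = {}
    for key in grads_math.keys() & grads_chat.keys():
        a, b = grads_math[key], grads_chat[key]
        report[key] = F.cosine_similarity(a.unsqueeze(0),
                                          b.unsqueeze(0)).item()
    return report
```

Sorting the report ascending, e.g. `sorted(report.items(), key=lambda kv: kv[1])`, surfaces the most-conflicted modules first.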
Problem

Research questions and friction points this paper is trying to address.

large reasoning models
domain heterogeneity
cross-domain interference
multi-task reinforcement learning
general-purpose reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular Gradient Surgery
multi-domain reinforcement learning
large reasoning models
gradient conflict
cross-domain interference
👥 Authors

Min Cai
PhD Student, University of Alberta
Natural Language Processing · Reinforcement Learning

Yu Liang
Baidu Inc.

Longzheng Wang
Baidu Inc.

Yan Wang
Baidu Inc.

Yueyang Zhang
Baidu Inc.

Long Xia
Research Scientist, Baidu
information retrieval · data mining · applied machine learning · recommender systems

Zhiyuan Sun
Baidu Inc.

Xi Ye
University of Alberta

Daiting Shi
Baidu Inc.