🤖 AI Summary
This work addresses a limitation of existing reinforcement learning methods that rely on a single policy network: simpler tasks tend to dominate parameter capacity and gradient updates, hindering learning on more complex ones. To overcome this, the authors propose a Phase-Aware Mixture of Experts (PA-MoE) architecture that employs a lightweight phase router to infer latent phase boundaries directly from the reinforcement learning objective. By consistently assigning the same expert to an entire temporal phase—rather than routing at the token level as in conventional MoE approaches—the method avoids phase fragmentation and enables dynamic, structured specialization of experts without requiring predefined phase categories. Experimental results demonstrate that PA-MoE significantly improves performance on complex tasks and enhances each expert's proficiency within its assigned phase.
📝 Abstract
Reinforcement learning (RL) has equipped LLM agents with a strong ability to solve complex tasks. However, existing RL methods typically use a \emph{single} policy network, which causes \emph{simplicity bias}: simple tasks occupy most parameters and dominate gradient updates, leaving insufficient capacity for complex tasks. A plausible remedy is to employ a Mixture-of-Experts (MoE) architecture in the policy network, since MoE allows different parameters (experts) to specialize in different tasks, preventing simple tasks from dominating all parameters. However, a key limitation of traditional MoE is its token-level routing: the router assigns each token to specialized experts independently, which fragments phase-consistent patterns into scattered expert assignments and thus undermines expert specialization. In this paper, we propose \textbf{Phase-Aware Mixture of Experts (PA-MoE)}. It first features a lightweight \emph{phase router} that learns latent phase boundaries directly from the RL objective, without pre-defined phase categories. The phase router then assigns all steps within a phase to the same expert, allowing each expert to preserve phase-specific expertise. Experimental results demonstrate the effectiveness of our proposed PA-MoE.
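To make the contrast between token-level and phase-level routing concrete, here is a minimal, hypothetical sketch. It is not the paper's implementation: the function names are invented, the per-token expert scores are toy values, and the phase boundaries are given explicitly, whereas in PA-MoE they are learned by the phase router from the RL objective.

```python
def route_token_level(scores):
    """Conventional token-level MoE routing: each token independently
    picks its top-scoring expert, which can fragment a coherent phase."""
    return [max(range(len(s)), key=s.__getitem__) for s in scores]

def route_phase_level(scores, boundaries):
    """Phase-level routing: average expert scores over each phase
    [start, end) and assign the entire phase to one expert."""
    n_experts = len(scores[0])
    assignments = []
    for start, end in boundaries:
        avg = [sum(s[e] for s in scores[start:end]) / (end - start)
               for e in range(n_experts)]
        expert = max(range(n_experts), key=avg.__getitem__)
        assignments.extend([expert] * (end - start))
    return assignments

# Toy per-token scores for a 4-token sequence over 2 experts.
scores = [[0.9, 0.1], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]

print(route_token_level(scores))                    # → [0, 1, 0, 1]
print(route_phase_level(scores, [(0, 2), (2, 4)]))  # → [0, 0, 1, 1]
```

Token-level routing scatters the sequence across experts even when adjacent tokens belong to the same phase; phase-level routing keeps each phase with a single expert, which is the temporal consistency the abstract describes.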