Divide-Fuse-Conquer: Eliciting "Aha Moments" in Multi-Scenario Games

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization and training instability of reinforcement learning (RL) policies in multi-scenario text games, this paper proposes a three-stage "Divide-Fuse-Conquer" framework. First, 18 TextArena games are heuristically grouped by rule complexity and difficulty to enable specialized RL fine-tuning. Second, model collaboration is achieved via weighted parameter fusion and multi-stage policy distillation. Third, a unified policy converges across heterogeneous environments. End-to-end experiments based on Qwen2.5-32B-Align demonstrate that the method achieves 7 wins and 4 draws, matching Claude 3.5's performance and significantly outperforming single-model baselines. To the best of the authors' knowledge, this is the first work to achieve scalable, stable, and highly generalizable reasoning for large language models across diverse text-based games.

📝 Abstract
Large language models (LLMs) have been observed to suddenly exhibit advanced reasoning abilities during reinforcement learning (RL), resembling an "aha moment" triggered by simple outcome-based rewards. While RL has proven effective in eliciting such breakthroughs in tasks involving mathematics, coding, and vision, it faces significant challenges in multi-scenario games. The diversity of game rules, interaction modes, and environmental complexities often leads to policies that perform well in one scenario but fail to generalize to others. Simply combining multiple scenarios during training introduces additional challenges, such as training instability and poor performance. To overcome these challenges, we propose Divide-Fuse-Conquer, a framework designed to enhance generalization in multi-scenario RL. This approach starts by heuristically grouping games based on characteristics such as rules and difficulties. Specialized models are then trained to excel at the games in each group; this is what we refer to as the divide step. Next, we fuse the model parameters from different groups into a new model and continue training it across multiple groups until the scenarios in all groups are conquered. Experiments across 18 TextArena games show that Qwen2.5-32B-Align trained with the Divide-Fuse-Conquer strategy reaches a performance level comparable to Claude 3.5, achieving 7 wins and 4 draws. We hope our approach can inspire future research on using reinforcement learning to improve the generalization of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing generalization in multi-scenario reinforcement learning games
Overcoming training instability in diverse game rule environments
Improving LLM performance across varied interaction modes and complexities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heuristically groups games by rules and difficulties
Trains specialized models for each game group
Fuses model parameters to enhance generalization
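The fusion step listed above, combining group-specialized models by averaging their parameters, can be sketched in plain Python. This is a minimal illustration only: the dict-of-lists parameter representation and the uniform weighting are assumptions for clarity, not the paper's actual implementation (which fuses full LLM checkpoints and also applies policy distillation).

```python
def fuse_parameters(state_dicts, weights):
    """Return the weighted average of several parameter dictionaries.

    state_dicts: list of dicts mapping parameter names to lists of floats
                 (a stand-in for tensor state dicts of the specialized models).
    weights:     one fusion weight per model; normalized internally.
    """
    assert len(state_dicts) == len(weights)
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so the weights sum to 1
    fused = {}
    for name in state_dicts[0]:
        fused[name] = [
            sum(w * sd[name][i] for w, sd in zip(norm, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return fused

# Two hypothetical group-specialized "models", each with one parameter vector.
model_a = {"layer.weight": [1.0, 2.0]}
model_b = {"layer.weight": [3.0, 6.0]}
fused = fuse_parameters([model_a, model_b], weights=[0.5, 0.5])
# fused["layer.weight"] == [2.0, 4.0]
```

With real LLM checkpoints the same idea applies per tensor; the fused model then serves as the initialization for the continued multi-group RL training described in the abstract.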
Xiaoqing Zhang
Gaoling School of Artificial Intelligence, Renmin University of China
Huabin Zheng
Moonshot AI
Ang Lv
Renmin University of China
Language Model
Yuhan Liu
Gaoling School of Artificial Intelligence, Renmin University of China
Zirui Song
PhD student at MBZUAI
NLP
Flood Sung
Moonshot AI
Foundation Models, LLM/VLM, Agent, Reinforcement Learning, Meta Learning
Xiuying Chen
MBZUAI
Trustworthy NLP, Human-Centered NLP, Computational Social Science
Rui Yan
Gaoling School of Artificial Intelligence, Renmin University of China