🤖 AI Summary
Large language models (LLMs) show limited generalization of reasoning capabilities across domains. Method: We propose a two-stage curriculum-style reinforcement learning (RL) framework. Stage I performs "cold-start" RL training on pretraining-aligned domains (e.g., mathematics) to bootstrap foundational reasoning; Stage II conducts multi-domain joint RL to transfer and consolidate these skills across domains, without requiring domain-specific reward models. Contribution/Results: The key contribution is a minimal, verifiable curriculum paradigm, "math-first cold start, then multi-domain joint training," combined with verifiability-based reward shaping and a cognitive-skill analysis. Experiments on Qwen3-4B and Llama-3.1-8B show consistent improvements in reasoning performance across mathematics, science, logic, and commonsense reasoning. Ablations confirm that both stages are indispensable. The approach offers an efficient, scalable pathway toward general-purpose reasoning in LLMs.
📝 Abstract
Reinforcement learning (RL) can elicit strong reasoning in large language models (LLMs), yet most open efforts focus on math and code. We propose Reasoning Curriculum, a simple two-stage curriculum that first elicits reasoning skills in pretraining-aligned domains such as math, then adapts and refines these skills across other domains via joint RL. Stage 1 performs a brief cold start and then math-only RL with verifiable rewards to develop reasoning skills. Stage 2 runs joint RL on mixed-domain data to transfer and consolidate these skills. The curriculum is minimal and backbone-agnostic, requiring no specialized reward models beyond standard verifiability checks. Evaluated on Qwen3-4B and Llama-3.1-8B over a multi-domain suite, Reasoning Curriculum yields consistent gains. Ablations and a cognitive-skill analysis indicate that both stages are necessary and that math-first elicitation increases cognitive behaviors important for solving complex problems. Reasoning Curriculum provides a compact, easy-to-adopt recipe for general reasoning.
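To make the two-stage recipe concrete, here is a minimal sketch of its data-scheduling side: a binary verifiability check standing in for the paper's rule-based rewards, and a scheduler that serves math-only batches in Stage 1 and mixed-domain batches in Stage 2. All function names, the `Answer:` output convention, and the batching details are illustrative assumptions, not the authors' implementation.

```python
import random

def verifiable_reward(response: str, gold_answer: str) -> float:
    """Binary verifiability check: 1.0 if the final answer matches the
    reference, else 0.0. No learned reward model is involved.
    Assumes (hypothetically) that the model ends with 'Answer: <x>'."""
    pred = response.rsplit("Answer:", 1)[-1].strip()
    return 1.0 if pred == gold_answer.strip() else 0.0

def curriculum_batches(math_data, mixed_data, stage1_steps, stage2_steps,
                       batch_size=4, seed=0):
    """Yield (stage, batch) pairs following the two-stage curriculum:
    Stage 1 samples math-only prompts (cold start + math-only RL);
    Stage 2 samples from the joint pool (multi-domain joint RL),
    keeping math in the mix so earlier skills are consolidated."""
    rng = random.Random(seed)
    for _ in range(stage1_steps):
        yield 1, rng.sample(math_data, min(batch_size, len(math_data)))
    joint_pool = math_data + mixed_data
    for _ in range(stage2_steps):
        yield 2, rng.sample(joint_pool, min(batch_size, len(joint_pool)))
```

In an actual RL loop, each yielded batch would be rolled out by the policy and scored with `verifiable_reward` before a policy-gradient update; the sketch isolates only the curriculum ordering that the paper identifies as essential.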