NEMOTRON-CROSSTHINK: Scaling Self-Learning beyond Math Reasoning

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) generalize weakly beyond mathematical reasoning, suffer from sparse and unreliable reward signals, and face significant task heterogeneity, especially in open-ended domains such as the humanities and social sciences. Method: This paper proposes a verifiable reinforcement-learning (RL) training framework for open-domain reasoning. It integrates cross-disciplinary, multi-source data; imposes structured output templates to constrain generation; and introduces an answer-verifiability filtering mechanism, enabling transferable reward modeling from mathematics to non-mathematical domains. A dynamic blend of real and synthetic question-answer pairs further enhances robustness. Contribution/Results: The framework significantly improves generalization: mathematical reasoning accuracy increases by 27.5–30.1% (AMC23, MATH-500), while non-mathematical reasoning (MMLU-PRO, GPQA-DIAMOND) improves by 3.8–15.1%. It also cuts token consumption for correct answers by 28%, demonstrating both efficacy and efficiency.

📝 Abstract
Large Language Models (LLMs) have shown strong reasoning capabilities, particularly when enhanced through Reinforcement Learning (RL). While prior work has successfully applied RL to mathematical reasoning -- where rules and correctness are well-defined -- generalizing these methods to broader reasoning domains remains challenging due to limited data, the lack of verifiable reward structures, and diverse task requirements. In this work, we propose NEMOTRON-CROSSTHINK, a framework that systematically incorporates multi-domain corpora, including both synthetic and real-world question-answer pairs, into RL training to improve generalization across diverse reasoning tasks. NEMOTRON-CROSSTHINK addresses key challenges by (1) incorporating data from varied sources spanning STEM, humanities, social sciences, etc.; (2) applying structured templates (e.g., multiple-choice and open-ended) to control answer-space complexity; (3) filtering for verifiable answers; and (4) optimizing data blending strategies that utilize data from multiple sources effectively. Our approach enables scalable and verifiable reward modeling beyond mathematics and demonstrates improved accuracies on both math (MATH-500: +30.1%, AMC23: +27.5%) and non-math reasoning benchmarks (MMLU-PRO: +12.8%, GPQA-DIAMOND: +11.3%, AGIEVAL: +15.1%, SUPERGPQA: +3.8%). Moreover, NEMOTRON-CROSSTHINK exhibits significantly improved response efficiency -- using 28% fewer tokens for correct answers -- highlighting more focused and effective reasoning. Through NEMOTRON-CROSSTHINK, we demonstrate that integrating multi-domain, multi-format data in RL leads to more accurate, efficient, and generalizable LLMs.
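The abstract's combination of structured templates (2) and verifiable-answer filtering (3) can be illustrated with a minimal sketch. The template (a `\boxed{...}` final answer) and function names here are illustrative assumptions, not the paper's actual implementation:

```python
import re

# Assumed structured template: the model must place its final answer
# inside \boxed{...}, so correctness can be checked by exact match.
ANSWER_PATTERN = re.compile(r"\\boxed\{([^}]*)\}")

def extract_answer(response: str):
    """Pull the final answer out of the structured template, if present."""
    match = ANSWER_PATTERN.search(response)
    return match.group(1).strip() if match else None

def verifiable_reward(response: str, gold: str) -> float:
    """Binary RL reward: 1.0 only when the templated answer matches gold.

    Responses that break the template score 0.0, which doubles as a
    filter against samples whose answers cannot be verified.
    """
    predicted = extract_answer(response)
    if predicted is None:
        return 0.0
    return 1.0 if predicted.lower() == gold.strip().lower() else 0.0
```

Constraining the answer space this way is what lets a rule-based verifier, rather than a learned reward model, score non-math domains.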
Problem

Research questions and friction points this paper is trying to address.

Generalizing RL methods to diverse reasoning domains
Addressing limited data and verifiable reward challenges
Improving accuracy and efficiency in multi-domain reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates multi-domain corpora for RL training
Uses structured templates to control answer complexity
Optimizes data blending from multiple sources
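The data-blending idea above can be sketched as weighted sampling over domain-specific pools. The source names and mixture weights below are hypothetical placeholders, not the blend the paper tunes:

```python
import random

def blend(sources: dict, weights: dict, n: int, seed: int = 0) -> list:
    """Draw n training examples, choosing each example's source
    according to the mixture weights, then sampling within it."""
    rng = random.Random(seed)
    names = list(sources)
    probs = [weights[name] for name in names]
    batch = []
    for _ in range(n):
        source = rng.choices(names, weights=probs, k=1)[0]
        batch.append(rng.choice(sources[source]))
    return batch

# Illustrative corpus and mixture (assumed, not from the paper).
corpus = {
    "math": ["q_math_1", "q_math_2"],
    "humanities": ["q_hum_1"],
    "stem": ["q_stem_1"],
}
mix = {"math": 0.5, "humanities": 0.25, "stem": 0.25}
batch = blend(corpus, mix, n=8)
```

Tuning the weights is the optimization step: the paper reports that the right blend improves both math and non-math accuracy over single-source training.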