Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the uncontrollability and irreproducibility of reasoning behaviors in Large Reasoning Models (LRMs), this paper proposes a systematic meta-capability alignment framework that formally models deductive, inductive, and abductive reasoning as self-verifiable tasks—marking the first such formalization. Methodologically, it introduces a three-stage pipeline: (1) individual capability alignment, (2) parameter-space fusion, and (3) domain-specific reinforcement learning (RL), integrated with automated task generation and chain-of-thought self-verification evaluation. On mathematical, coding, and scientific benchmarks, the approach outperforms instruction-tuning baselines by over 10% on average; incorporating domain-adapted RL yields an additional 2% gain. The framework significantly improves reasoning consistency, controllability, and scalability. Its core contributions lie in the formal meta-capability modeling and the design of verifiable alignment mechanisms—enabling rigorous, interpretable, and generalizable reasoning enhancement.

📝 Abstract
Large reasoning models (LRMs) already possess a latent capacity for long chain-of-thought reasoning. Prior work has shown that outcome-based reinforcement learning (RL) can incidentally elicit advanced reasoning behaviors such as self-correction, backtracking, and verification, phenomena often referred to as the model's "aha moment". However, the timing and consistency of these emergent behaviors remain unpredictable and uncontrollable, limiting the scalability and reliability of LRMs' reasoning capabilities. To address these limitations, we move beyond reliance on prompts and coincidental "aha moments". Instead, we explicitly align models with three meta-abilities: deduction, induction, and abduction, using automatically generated, self-verifiable tasks. Our three-stage pipeline (individual alignment, parameter-space merging, and domain-specific reinforcement learning) boosts performance by over 10% relative to instruction-tuned baselines. Furthermore, domain-specific RL from the aligned checkpoint yields an additional 2% average gain in the performance ceiling across math, coding, and science benchmarks, demonstrating that explicit meta-ability alignment offers a scalable and dependable foundation for reasoning. Code is available at: https://github.com/zhiyuanhubj/Meta-Ability-Alignment
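The abstract's "automatically generated, self-verifiable tasks" can be illustrated with a minimal sketch: a task generator emits a reasoning instance together with a machine-checkable gold answer, so the alignment reward requires no human labels. The modus-ponens template and the `make_deduction_task`/`verify` helpers below are hypothetical illustrations, not the paper's actual task formats.

```python
import random

def make_deduction_task(rng):
    """Generate a modus-ponens style deduction instance with a
    machine-checkable answer (hypothetical task format)."""
    a, b = rng.sample(["P", "Q", "R", "S"], 2)
    premise = f"If {a} then {b}. {a} is true. What follows?"
    return premise, b  # b is the entailed conclusion

def verify(gold_answer, model_answer):
    """Self-verification: reward 1 only when the model's conclusion
    exactly matches the programmatically derived one."""
    return int(model_answer.strip() == gold_answer)

rng = random.Random(0)
premise, gold = make_deduction_task(rng)
```

Because the gold answer is derived by construction, the same generator can produce unlimited training instances whose rewards are verifiable without a judge model.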
Problem

Research questions and friction points this paper is trying to address.

Unpredictable timing of emergent reasoning behaviors in LRMs
Inconsistent self-correction and verification in large reasoning models
Lack of scalable foundation for reliable meta-abilities alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Align models with deduction, induction, abduction
Use self-verifiable tasks for alignment
Three-stage pipeline boosts performance significantly
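The second pipeline stage, parameter-space merging, can be sketched as a linear interpolation of matching parameters across the three meta-ability-aligned checkpoints. The `merge_state_dicts` helper and the equal-ish weighting below are assumptions for illustration; the paper's actual merging weights are not reproduced here.

```python
def merge_state_dicts(state_dicts, weights):
    """Linearly combine matching parameters from several checkpoints
    (hypothetical sketch of parameter-space merging)."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should sum to 1"
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of the same parameter across all checkpoints.
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Toy usage with scalar "parameters" standing in for weight tensors.
deduction_ckpt = {"layer.w": 1.0}
induction_ckpt = {"layer.w": 2.0}
abduction_ckpt = {"layer.w": 4.0}
merged = merge_state_dicts(
    [deduction_ckpt, induction_ckpt, abduction_ckpt],
    [0.5, 0.25, 0.25],
)
```

In practice the values would be weight tensors (e.g. PyTorch `state_dict` entries) rather than floats, but the interpolation logic is the same; the merged checkpoint then serves as the starting point for domain-specific RL.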