ARISE: Agent Reasoning with Intrinsic Skill Evolution in Hierarchical Reinforcement Learning

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that current language models struggle to effectively accumulate and reuse emergent reasoning strategies acquired during training for mathematical problem solving. To overcome this limitation, the authors propose a hierarchical reinforcement learning framework that integrates, for the first time, an evolvable skill library with hierarchical reinforcement learning. In this architecture, a high-level module maintains a dynamic skill repository through structured induction and strategy-driven retrieval, while a low-level module generates reasoning steps grounded in the retrieved skills. The two components are jointly optimized via a hierarchical reward mechanism. The approach enables cross-task strategy reuse and significantly outperforms GRPO-family algorithms and memory-augmented baselines across two base models and seven mathematical reasoning benchmarks, demonstrating particularly strong performance on out-of-distribution tasks.

📝 Abstract
The dominant paradigm for improving mathematical reasoning in language models relies on reinforcement learning with verifiable rewards. Yet existing methods treat each problem instance in isolation, without leveraging the reusable strategies that emerge and accumulate during training. To this end, we introduce ARISE (Agent Reasoning via Intrinsic Skill Evolution), a hierarchical reinforcement learning framework in which a shared policy operates both to manage skills at the high level and to generate responses at the low level (denoted the Skills Manager and the Worker, respectively). The Manager maintains a tiered skill library through a dedicated skill-generation rollout that performs structured summarization of successful solution traces (after execution), while employing a policy-driven selection mechanism to retrieve relevant skills that condition future rollouts (before execution). A hierarchical reward design guides the co-evolution of reasoning ability and library quality. Experiments on two base models and seven benchmarks spanning both competition mathematics and Omni-MATH show that ARISE consistently outperforms GRPO-family algorithms and memory-augmented baselines, with particularly notable gains on out-of-distribution tasks. Ablation studies confirm that each component contributes to the observed improvements and that library quality and reasoning performance improve in tandem throughout training. Code is available at https://github.com/Skylanding/ARISE.
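The Manager/Worker loop in the abstract can be caricatured in a few lines of Python. Everything below (`SkillLibrary`, `worker_rollout`, score-ranked retrieval, the toy verifier) is an illustrative assumption, not the paper's implementation: in ARISE both roles are played by one shared language-model policy, and retrieval is itself policy-driven rather than a fixed ranking.

```python
class SkillLibrary:
    """Toy skill library; all names here are illustrative, not from the paper."""

    def __init__(self):
        self.skills = []  # each entry: {"text": str, "score": float}

    def retrieve(self, k=2):
        # Stand-in for policy-driven selection: rank by stored reward score.
        return sorted(self.skills, key=lambda s: -s["score"])[:k]

    def induct(self, trace, reward):
        # Structured summarization after execution: only successful
        # (positively rewarded) traces are distilled into skills.
        if reward > 0:
            self.skills.append({"text": f"strategy from: {trace}", "score": reward})


def worker_rollout(problem, skills):
    # Low-level Worker: produce an answer conditioned on retrieved skills.
    # Here the "model" is a trivial solver that sums the integers it sees.
    return sum(int(t) for t in problem.split() if t.lstrip("-").isdigit())


def verify(problem, answer):
    # Verifiable reward: 1.0 if the answer matches ground truth, else 0.0.
    truth = sum(int(t) for t in problem.split() if t.lstrip("-").isdigit())
    return 1.0 if answer == truth else 0.0


library = SkillLibrary()
for problem in ["2 3", "4 -1", "10 5"]:
    skills = library.retrieve()        # before execution: select relevant skills
    answer = worker_rollout(problem, skills)
    reward = verify(problem, answer)   # verifiable reward signal
    library.induct(problem, reward)    # after execution: grow the library

print(len(library.skills))  # → 3
```

The point of the sketch is only the ordering: retrieval conditions the rollout before execution, and induction updates the library after execution, so library quality and task reward can improve together.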
Problem

Research questions and friction points this paper is trying to address.

mathematical reasoning
reusable strategies
language models
reinforcement learning
skill reuse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Reinforcement Learning
Skill Library Evolution
Mathematical Reasoning
Policy-Driven Skill Selection
Structured Summarization
Yu Li
Department of Electrical and Computer Engineering, George Washington University
Rui Miao
Meta
Networking · Networked Systems · Distributed Systems
Zhengling Qi
School of Business, George Washington University
Tian Lan
George Washington University
Machine Learning · Optimization · Cyber Security