EAPO: Enhancing Policy Optimization with On-Demand Expert Assistance

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) trained with reinforcement learning suffer from inefficient exploration and suboptimal policy optimization due to sparse reward signals. Method: This paper proposes Expert-Assisted Policy Optimization (EAPO), a framework enabling the policy model to autonomously decide *when* and *how* to consult an external expert, and to internalize the acquired interaction knowledge as intrinsic reasoning capability, eliminating persistent expert dependence. EAPO integrates verifiable reward signals with a multi-round dynamic consultation mechanism to refine reasoning trajectories online and enhance policy reliability. Results: On mathematical reasoning benchmarks, including AIME 2024/2025 and AIMO 2025, EAPO achieves an average gain of 5 points over pure self-exploration baselines, significantly outperforming conventional expert-assisted pipelines and knowledge-distillation methods. The authors present it as the first approach to realize *on-demand*, *autonomous*, and *endogenous* expert guidance in LLM-based RL.
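The consultation loop described above can be pictured as a rollout in which the policy may pause mid-trajectory to query the expert. The Python below is a minimal sketch under assumed interfaces: the `<consult>` tags, the `policy`/`expert` objects, and `verify_answer` are hypothetical stand-ins for exposition, not the paper's published API.

```python
# Illustrative sketch of an EAPO-style multi-round consultation rollout.
# Every interface here is an assumption for exposition; the paper does not
# specify this API.

CONSULT_OPEN, CONSULT_CLOSE = "<consult>", "</consult>"
MAX_CONSULTS = 3  # assumed cap on expert rounds per trajectory

def rollout(policy, expert, question, verify_answer):
    """Generate one reasoning trajectory, pausing whenever the policy
    autonomously asks the expert for help, then score the result with
    a verifiable outcome reward."""
    trajectory = question
    for _ in range(MAX_CONSULTS):
        # The policy reasons until it either finishes or requests a consult.
        segment = policy.generate(trajectory, stop=[CONSULT_CLOSE])
        trajectory += segment
        if CONSULT_OPEN not in segment:
            break  # the policy chose to answer without (further) expert help
        # Append the expert's reply in-context and let the policy continue.
        query = segment.split(CONSULT_OPEN, 1)[1]
        trajectory += CONSULT_CLOSE + expert.answer(query)
    else:
        # Consultation budget exhausted: force a final, unassisted answer.
        trajectory += policy.generate(trajectory)
    # Sparse but verifiable reward, e.g. exact match on the final answer.
    reward = 1.0 if verify_answer(trajectory) else 0.0
    return trajectory, reward
```

Because the expert's replies are scored as part of the full trajectory, consultations that lead to verified answers are reinforced, which is the sense in which the summary says interaction knowledge becomes intrinsic.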

📝 Abstract
Large language models (LLMs) have recently advanced in reasoning when optimized with reinforcement learning (RL) under verifiable rewards. Existing methods primarily rely on outcome-based supervision to strengthen internal LLM reasoning, which often leads to inefficient exploration and sparse rewards. To mitigate this issue, we propose Expert-Assisted Policy Optimization (EAPO), a novel RL framework that enhances exploration by incorporating multi-turn interactions with external experts during training. Unlike prior methods, where policies reason in isolation, EAPO incentivizes the policy to adaptively determine when and how to consult experts, yielding richer reward signals and more reliable reasoning trajectories. External assistance ultimately internalizes expert knowledge into the policy model, amplifying the model's inherent reasoning capabilities. By evaluation time, the policy model is sufficiently optimized to solve questions independently, producing improved reasoning paths and more accurate solutions. Experiments on mathematical reasoning benchmarks, including AIME 2024, AIME 2025, and AIMO 2025, show that EAPO consistently outperforms expert-assisted workflows, expert-distilled models, and RL baselines, with an average gain of 5 points over self-exploratory models.
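The abstract's internalization claim implies that expert reliance must be discouraged over training so the deployed model answers unaided. One plausible way to realize this (an assumption, not confirmed by the paper) is to subtract a small cost per expert call from the verifiable outcome reward and optimize with group-normalized advantages in the style of GRPO; `LAMBDA` and the normalization below are illustrative choices.

```python
import statistics

LAMBDA = 0.1  # assumed per-consultation cost discouraging lasting expert reliance

def shaped_reward(correct: bool, n_consults: int) -> float:
    """Verifiable outcome reward minus a small cost per expert call, nudging
    the policy to internalize expert knowledge and eventually answer alone."""
    return (1.0 if correct else 0.0) - LAMBDA * n_consults

def group_advantages(rewards: list[float]) -> list[float]:
    """Group-normalized advantages in the style of GRPO: each sampled
    trajectory for a question is scored relative to its group's mean."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Example: four sampled trajectories for one question with varying expert usage.
rewards = [shaped_reward(c, n) for c, n in [(True, 0), (True, 2), (False, 1), (False, 0)]]
print(group_advantages(rewards))  # the unaided correct answer gets the largest advantage
```

Under this shaping, a correct answer with zero consultations dominates its group, so the pressure toward independent reasoning at evaluation time falls out of the reward itself.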
Problem

Research questions and friction points this paper is trying to address.

Enhancing RL exploration with on-demand expert interactions
Overcoming sparse rewards in LLM reasoning optimization
Internalizing expert knowledge into independent policy models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses on-demand expert interactions during training
Internalizes expert knowledge into policy model
Enhances exploration with adaptive expert consultation
👥 Authors
Siyao Song
ByteDance BandAI
Cong Ma
ByteDance BandAI
Zhihao Cheng
ByteDance BandAI
Shiye Lei
University of Sydney
LLM Reasoning, Data-centric AI, Deep Learning Theory
Minghao Li
Beihang University
Natural Language Processing
Ying Zeng
ByteDance BandAI
Huaixiao Tou
ByteDance BandAI
Kai Jia
MIT