ProCeedRL: Process Critic with Exploratory Demonstration Reinforcement Learning for LLM Agentic Reasoning

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of exploration failure in multi-turn agent tasks, where prolonged interactions and environmental stochasticity often lead to misleading contextual cues and compounding decision biases due to erroneous actions. To mitigate this, the paper introduces ProCeedRL, a novel framework that incorporates a process-level critic to monitor interaction trajectories in real time and integrates a reflection-based exploratory demonstration mechanism. This approach shifts exploration from passive action selection to active intervention, effectively interrupting the feedback loop of error accumulation. Empirical results demonstrate that ProCeedRL substantially enhances the exploration efficiency and reasoning capabilities of large language model agents in complex deep-search and embodied tasks, surpassing current state-of-the-art exploration performance.
📝 Abstract
Reinforcement Learning (RL) significantly enhances the reasoning abilities of large language models (LLMs), yet applying it to multi-turn agentic tasks remains challenging due to the long-horizon nature of interactions and the stochasticity of environmental feedback. We identify a structural failure mode in agentic exploration: suboptimal actions elicit noisy observations that accumulate into misleading contexts, which further weaken subsequent decision-making and make recovery increasingly difficult. This cumulative feedback loop of errors renders standard exploration strategies ineffective, leaving them vulnerable to flaws in the model's reasoning and to the environment's randomness. To mitigate this issue, we propose ProCeedRL: Process Critic with Exploratory Demonstration RL, which shifts exploration from passive selection to active intervention. ProCeedRL employs a process-level critic to monitor interactions in real time, incorporating reflection-based demonstrations that guide agents to halt error accumulation. We find that this approach significantly exceeds the model's saturated exploration performance, demonstrating substantial exploratory benefits. By learning from exploratory demonstrations and on-policy samples, ProCeedRL significantly improves exploration efficiency and achieves superior performance on complex deep-search and embodied tasks.
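The core mechanism the abstract describes, a process-level critic that scores each step and triggers a reflection-based demonstration when quality degrades, can be sketched as a rollout loop. Everything here is hypothetical: the paper publishes no API, so `critic`, `reflect`, the threshold, and the environment interface are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a ProCeedRL-style interaction loop.
# critic(history) -> float in [0, 1]: a process-level quality score
#   for the trajectory so far (assumed interface).
# reflect(history) -> action: a reflection-based exploratory
#   demonstration used to redirect the agent (assumed interface).

def rollout(env, policy, critic, reflect, max_steps=10, threshold=0.5):
    """Collect one trajectory, actively intervening when the critic
    detects error accumulation instead of passively sampling on."""
    history = []
    obs = env.reset()
    for _ in range(max_steps):
        score = critic(history)
        if history and score < threshold:
            # Active intervention: interrupt the error feedback loop
            # with a demonstration rather than another policy sample.
            action = reflect(history)
        else:
            action = policy(obs, history)
        obs, reward, done = env.step(action)
        history.append((action, obs, reward))
        if done:
            break
    return history
```

The trajectories collected this way would mix on-policy samples (critic satisfied) with demonstration steps (critic triggered), matching the abstract's description of learning from both sources.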
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Large Language Models
Agentic Reasoning
Exploration
Long-horizon Tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process Critic
Exploratory Demonstration
Reinforcement Learning
Agentic Reasoning
Error Accumulation Mitigation
Jingyue Gao
Institute for Interdisciplinary Information Sciences, Tsinghua University
Yanjiang Guo
Tsinghua University
Embodied AI · Generative Model
Xiaoshuai Chen
Independent Researcher
Jianyu Chen
Assistant Professor, Tsinghua University
AI · Robotics