No More Stale Feedback: Co-Evolving Critics for Open-World Agent Learning

📅 2026-01-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency in reinforcement learning caused by stale feedback from static or offline critics during policy evolution. To overcome this limitation, the authors propose the ECHO framework, which jointly optimizes the policy and critic through a synchronized co-evolution mechanism. ECHO aligns dynamic feedback via cascaded trajectory rollouts and grouped advantage estimation, mitigates learning plateaus through a saturation-aware reward-shaping objective, and keeps the policy and critic continuously synchronized using a dual-track GRPO update strategy. Experimental results demonstrate that ECHO significantly improves training stability and long-horizon task success in open-world environments.

📝 Abstract
Critique-guided reinforcement learning (RL) has emerged as a powerful paradigm for training LLM agents by augmenting sparse outcome rewards with natural-language feedback. However, current methods often rely on static or offline critic models, which fail to adapt as the policy evolves. In on-policy RL, the agent's error patterns shift over time, causing stationary critics to become stale and provide feedback of diminishing utility. To address this, we introduce ECHO (Evolving Critic for Hindsight-Guided Optimization), a framework that jointly optimizes the policy and critic through a synchronized co-evolutionary loop. ECHO utilizes a cascaded rollout mechanism in which the critic generates multiple diagnoses for an initial trajectory, followed by policy refinement, enabling group-structured advantage estimation. We address the challenge of learning plateaus via a saturation-aware gain shaping objective, which rewards the critic for inducing incremental improvements in high-performing trajectories. By employing dual-track GRPO updates, ECHO ensures the critic's feedback stays synchronized with the evolving policy. Experimental results show that ECHO yields more stable training and higher long-horizon task success across open-world environments.
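The group-structured advantage estimation the abstract mentions follows the usual GRPO recipe of normalizing each trajectory's outcome reward against its rollout group's statistics. A minimal sketch of that normalization step; all names (`group_advantages`, `rewards`) are illustrative, not taken from the paper's code:

```python
from statistics import mean, pstdev

def group_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each trajectory's reward
    against the mean and std of its own rollout group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four refined trajectories rolled out from the same
# initial trajectory and critique set (rewards are hypothetical).
advs = group_advantages([1.0, 0.0, 1.0, 0.0])
```

Successful trajectories in the group receive positive advantages and failed ones negative, so the policy update pushes toward the group's better completions without needing a learned value baseline.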
Problem

Research questions and friction points this paper is trying to address.

stale feedback
critic adaptation
policy evolution
open-world RL
critique-guided learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

co-evolutionary reinforcement learning
critique-guided RL
ECHO framework
saturation-aware gain shaping
dual-track GRPO
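The "saturation-aware gain shaping" item above rewards the critic for inducing incremental improvements even when the baseline trajectory already performs well. One plausible form of such an objective, purely as an illustration; the weighting function and `tau` are assumptions, not the paper's definition:

```python
import math

def saturation_aware_gain(r_before, r_after, tau=0.1):
    """Illustrative critic reward: the raw improvement induced by a
    critique, upweighted when the baseline trajectory is already near
    saturation (where further gains are hardest to obtain)."""
    gain = r_after - r_before
    weight = 1.0 + math.tanh(r_before / tau)  # grows as the baseline saturates
    return weight * gain
```

Under this shaping, the same absolute improvement is worth more on a high-performing trajectory than on a weak one, which counteracts the plateau where critiques of already-good rollouts stop contributing learning signal.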
Authors

Zhicong Li · Gaoling School of Artificial Intelligence, Renmin University of China
Lingjie Jiang · Peking University
Yulan Hu · Amap, Alibaba Group
Xingchen Zeng · Hong Kong University of Science and Technology (Guangzhou) · Multimodal LLM, Visualization, High-dimensional Data
Yixia Li · Southern University of Science and Technology · Natural Language Processing
Xiangwen Zhang · Amap, Alibaba Group
Guanhua Chen · Assistant Professor, Southern University of Science and Technology · Reasoning LLMs, Data Synthesis, Multimodal
Zheng Pan · Amap, Alibaba Group
Xin Li · Alibaba Group · Natural Language Processing
Yong Liu · Gaoling School of Artificial Intelligence, Renmin University of China