Contextual Experience Replay for Self-Improvement of Language Agents

📅 2025-06-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM agents face two key bottlenecks in sequential decision-making tasks such as web navigation: (1) lack of environment-specific experience, and (2) inability to continuously learn from past interactions during inference. To address these, we propose Contextual Experience Replay (CER), a training-free framework featuring a novel dynamic memory buffer that—during inference—incrementally accumulates, abstracts, and retrieves environmental states and decision patterns. CER enables online adaptation without fine-tuning or gradient updates. It constructs a lightweight, context-window-based memory supporting experience synthesis and retrieval-augmented reasoning, and is fully compatible with arbitrary black-box LLM agents. Evaluated on VisualWebArena, CER achieves a 31.9% task success rate. On WebArena, it improves upon the GPT-4o baseline by 51.0%, reaching a 36.7% average success rate—demonstrating both effectiveness and strong generalization across diverse web environments.

📝 Abstract
Large language model (LLM) agents have been applied to sequential decision-making tasks such as web navigation, but without any environment-specific experience, they often fail in these complex tasks. Moreover, current LLM agents are not designed to continually learn from past experiences during inference time, which could be crucial for them to acquire such environment-specific experience. To address this, we propose Contextual Experience Replay (CER), a training-free framework that enables efficient self-improvement for language agents within their context window. Specifically, CER accumulates and synthesizes past experiences into a dynamic memory buffer. These experiences encompass environment dynamics and common decision-making patterns, allowing agents to retrieve and augment themselves with relevant knowledge in new tasks, enhancing their adaptability in complex environments. We evaluate CER on the challenging WebArena and VisualWebArena benchmarks. On VisualWebArena, CER achieves a competitive success rate of 31.9%. On WebArena, CER also reaches a competitive average success rate of 36.7%, relatively improving the success rate of the GPT-4o agent baseline by 51.0%. We also conduct a comprehensive analysis to demonstrate CER's efficiency and validity and to better understand its behavior.
Problem

Research questions and friction points this paper is trying to address.

LLM agents fail in complex sequential decision-making tasks without experience
Current LLM agents lack continual learning during inference time
Proposes CER framework for self-improvement via dynamic memory of past experiences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Contextual Experience Replay (CER) framework
Accumulates past experiences in dynamic memory
Retrieves relevant knowledge for new tasks
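The accumulate-synthesize-retrieve loop described above can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the paper's implementation: the real CER uses an LLM to abstract trajectories into environment dynamics and decision patterns and to score relevance, whereas here synthesis is a naive truncation and retrieval is simple word overlap.

```python
from collections import Counter

class ExperienceBuffer:
    """Minimal sketch of a CER-style dynamic memory buffer (names hypothetical)."""

    def __init__(self):
        # Each entry pairs a task description with a distilled insight.
        self.experiences = []

    def accumulate(self, task, trajectory):
        """Distill a finished trajectory into a short insight and store it.
        Stand-in for the paper's LLM-based experience synthesis."""
        insight = " -> ".join(trajectory[:3])
        self.experiences.append((task, insight))

    def retrieve(self, new_task, k=2):
        """Return the k insights whose task wording overlaps most with the
        new task (a stand-in for LLM-based relevance scoring)."""
        query = Counter(new_task.lower().split())

        def overlap(entry):
            return sum((Counter(entry[0].lower().split()) & query).values())

        ranked = sorted(self.experiences, key=overlap, reverse=True)
        return [insight for _, insight in ranked[:k]]

# Accumulate experiences from two finished tasks, then retrieve for a new one.
buf = ExperienceBuffer()
buf.accumulate("find cheapest laptop on shopping site",
               ["open search bar", "type 'laptop'", "sort by price ascending"])
buf.accumulate("post a comment on forum thread",
               ["open thread", "click reply", "submit text"])
hints = buf.retrieve("find cheapest phone on shopping site", k=1)
```

Retrieved insights would then be prepended to the agent's context window for the new task, which is what makes the approach training-free and compatible with black-box LLMs.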
Yitao Liu
Princeton University, The University of Hong Kong
Chenglei Si
Stanford University
Karthik R. Narasimhan
Princeton University
Shunyu Yao
Princeton University