🤖 AI Summary
LLM agents face two key bottlenecks in sequential decision-making tasks such as web navigation: (1) a lack of environment-specific experience, and (2) an inability to continuously learn from past interactions during inference. To address these, we propose Contextual Experience Replay (CER), a training-free framework built around a dynamic memory buffer that, during inference, incrementally accumulates, abstracts, and retrieves environment dynamics and decision-making patterns. CER enables online adaptation without fine-tuning or gradient updates: it maintains a lightweight, context-window-based memory that supports experience synthesis and retrieval-augmented reasoning, and it is fully compatible with arbitrary black-box LLM agents. On VisualWebArena, CER achieves a 31.9% task success rate; on WebArena, it reaches a 36.7% average success rate, a 51.0% relative improvement over the GPT-4o agent baseline, demonstrating both effectiveness and strong generalization across diverse web environments.
📝 Abstract
Large language model (LLM) agents have been applied to sequential decision-making tasks such as web navigation, but without environment-specific experience they often fail in these complex tasks. Moreover, current LLM agents are not designed to continually learn from past experiences at inference time, which could be crucial for acquiring such environment-specific knowledge. To address this, we propose Contextual Experience Replay (CER), a training-free framework that enables efficient self-improvement for language agents within their context window. Specifically, CER accumulates and synthesizes past experiences into a dynamic memory buffer. These experiences encompass environment dynamics and common decision-making patterns, which agents can retrieve to augment themselves with relevant knowledge on new tasks, enhancing their adaptability in complex environments. We evaluate CER on the challenging WebArena and VisualWebArena benchmarks. On VisualWebArena, CER achieves a competitive success rate of 31.9%; on WebArena, it achieves a competitive average success rate of 36.7%, a 51.0% relative improvement over the GPT-4o agent baseline. We also conduct comprehensive analyses to demonstrate its efficiency and validity, and to better understand its behavior.
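The accumulate → synthesize → retrieve → augment loop described above can be sketched in a few lines. This is a hypothetical minimal interface, not the paper's implementation: the `Experience` fields, the word-overlap retriever, and the `augment_prompt` helper are all illustrative stand-ins (CER itself would use an LLM to distill trajectories and a stronger relevance measure for retrieval).

```python
# Hypothetical sketch of a CER-style dynamic memory buffer.
# All names and the toy retriever are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Experience:
    task: str      # the task the agent attempted
    summary: str   # distilled environment dynamics / decision pattern


@dataclass
class ExperienceBuffer:
    experiences: list = field(default_factory=list)

    def add(self, task: str, summary: str) -> None:
        # In CER, an LLM would synthesize the raw trajectory into reusable
        # knowledge; here we store a pre-written summary directly.
        self.experiences.append(Experience(task, summary))

    def retrieve(self, new_task: str, k: int = 2) -> list:
        # Toy relevance score: word overlap between task descriptions.
        # The paper would use an LLM- or embedding-based retriever instead.
        def score(exp: Experience) -> int:
            return len(set(exp.task.lower().split())
                       & set(new_task.lower().split()))
        ranked = sorted(self.experiences, key=score, reverse=True)
        return [e for e in ranked[:k] if score(e) > 0]

    def augment_prompt(self, base_prompt: str, new_task: str) -> str:
        # Prepend retrieved experiences to the agent's context window.
        retrieved = self.retrieve(new_task)
        if not retrieved:
            return base_prompt
        notes = "\n".join(f"- {e.summary}" for e in retrieved)
        return f"Relevant past experiences:\n{notes}\n\n{base_prompt}"


buf = ExperienceBuffer()
buf.add("buy a red shirt on the shopping site",
        "Use the search bar, then filter by color before sorting by price.")
buf.add("post a comment on the forum",
        "Log in first; the comment box appears below the thread.")

prompt = buf.augment_prompt("Task: buy running shoes",
                            "buy running shoes on the shopping site")
print("search bar" in prompt)  # the shopping experience is retrieved
```

Because the buffer only grows and retrieval happens per task, the agent adapts online with no gradient updates, which is what makes the approach compatible with black-box LLM APIs.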