🤖 AI Summary
Language model agents suffer from low sample efficiency when learning through online interaction in novel environments, which limits their use in settings where interaction is costly, such as human–agent interaction or physical-system resets. To address this, we propose ECHO (Experience Consolidation via Hindsight Optimization), a prompting framework that adapts hindsight experience replay from reinforcement learning to language model agents: it uses the LM itself to retrospectively identify alternative subgoals that were achieved during failed attempts, rewrite those interactions into optimized trajectories for the relabeled goals, and store compressed trajectory representations in memory for efficient reuse. On stateful versions of the XMiniGrid and PeopleJoinQA benchmarks, ECHO improves over vanilla language agent baselines by up to 80%; on XMiniGrid it also outperforms sophisticated agent architectures including Reflexion and AWM, demonstrating faster adaptation to unseen environments through more effective use of past experience.
📝 Abstract
Language model (LM) agents deployed in novel environments often exhibit poor sample efficiency when learning from sequential interactions. This significantly hinders the usefulness of such agents in environments where interaction is costly (for example, when they interact with humans or reset physical systems). While a number of existing LM agent architectures incorporate various mechanisms for experience storage and reflection, they make limited use of LMs' abilities to directly generate or reason about full counterfactual trajectories. We introduce ECHO (Experience Consolidation via Hindsight Optimization), a prompting framework that adapts hindsight experience replay from reinforcement learning for language model agents. ECHO generates optimized trajectories for alternative goals that could have been achieved during failed attempts, effectively creating synthetic positive examples from unsuccessful interactions. Our approach consists of two components: a hindsight rule that uses the language model itself to identify relevant subgoals and generate optimized trajectories, and an update rule that maintains compressed trajectory representations in memory. We evaluate ECHO on stateful versions of XMiniGrid, a text-based navigation and planning benchmark, and PeopleJoinQA, a collaborative information-gathering enterprise simulation. Across both domains, ECHO outperforms vanilla language agent baselines by up to 80%; in XMiniGrid, it also outperforms a number of sophisticated agent architectures including Reflexion and AWM, demonstrating faster adaptation to novel environments through more effective utilization of past experiences.
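The two components described above (a hindsight rule that relabels and rewrites failed trajectories, and an update rule that stores compressed representations) can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the LM interface is stubbed as a plain callable, and all names (`identify_subgoal`, `rewrite_trajectory`, `EchoMemory`, `echo_step`) as well as the FIFO eviction policy are illustrative assumptions.

```python
def identify_subgoal(lm, trajectory):
    """Hindsight rule, step 1: ask the LM which alternative goal the
    (possibly failed) trajectory actually achieved."""
    return lm(f"Which subgoal did this trajectory achieve?\n{trajectory}")

def rewrite_trajectory(lm, trajectory, subgoal):
    """Hindsight rule, step 2: ask the LM for an optimized trajectory
    that reaches the relabeled subgoal directly."""
    return lm(f"Rewrite this trajectory to achieve '{subgoal}' efficiently:\n{trajectory}")

class EchoMemory:
    """Update rule: keep compressed trajectory representations keyed by goal."""
    def __init__(self, lm, max_entries=50):
        self.lm = lm
        self.max_entries = max_entries
        self.store = {}  # goal -> compressed trajectory summary

    def update(self, goal, optimized_trajectory):
        # Compress the trajectory before storing it.
        summary = self.lm(f"Summarize in one line:\n{optimized_trajectory}")
        self.store[goal] = summary
        # Evict oldest entries beyond the cap (simple FIFO; the paper's
        # exact eviction policy is not specified here).
        while len(self.store) > self.max_entries:
            self.store.pop(next(iter(self.store)))

def echo_step(lm, memory, trajectory):
    """One consolidation step: relabel a failed trajectory, rewrite it,
    and store the compressed result as a synthetic positive example."""
    subgoal = identify_subgoal(lm, trajectory)
    optimized = rewrite_trajectory(lm, trajectory, subgoal)
    memory.update(subgoal, optimized)
    return subgoal, optimized
```

At decision time, the agent would retrieve entries from `memory.store` relevant to its current goal and include them in its prompt, so that experience from unsuccessful episodes still informs future behavior.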