🤖 AI Summary
Addressing the generalization bottleneck of natural-language-driven autonomous agents in offline, few-shot, and unlabeled real-world settings, this paper proposes TEDUO, a novel training paradigm that assigns large language models (LLMs) dual roles: offline data enhancer and zero-shot policy generalizer. TEDUO integrates LLMs' instruction-following capability and world priors into an offline reinforcement learning framework, eliminating the need for human annotations or online interaction. Its core innovation lies in decoupling language-conditioned policy learning from cross-goal and cross-state generalization, thereby substantially improving sample efficiency and robustness under minimal data constraints. Experiments adopt an in-the-wild evaluation setting, demonstrating reliable language-conditioned execution on unseen goals and states. TEDUO establishes a scalable paradigm for decision-making agents in low-resource environments.
📝 Abstract
To develop autonomous agents capable of executing complex, multi-step decision-making tasks specified by humans in natural language, existing reinforcement learning approaches typically require expensive labeled datasets or access to real-time experimentation. Moreover, conventional methods often struggle to generalize to unseen goals and states, limiting their practical applicability. This paper presents TEDUO, a novel training pipeline for offline language-conditioned policy learning. TEDUO operates on easy-to-obtain, unlabeled datasets and is suited to so-called in-the-wild evaluation, wherein the agent encounters previously unseen goals and states. To address the challenges posed by such data and evaluation settings, our method leverages the prior knowledge and instruction-following capabilities of large language models (LLMs) to enhance the fidelity of pre-collected offline data and enable flexible generalization to new goals and states. Empirical results demonstrate that the dual role of LLMs in our framework, as data enhancers and generalizers, facilitates both effective and data-efficient learning of generalizable language-conditioned policies.
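The dual-role pipeline described above can be sketched conceptually: an LLM first labels unlabeled offline transitions with the goals they accomplish, a goal-conditioned policy is then learned offline, and at test time an LLM maps a previously unseen natural-language instruction onto a known goal. This is a minimal illustrative sketch, not the paper's implementation; the function names and the trivial rule-based stand-ins for the LLM calls are assumptions made for the example.

```python
def llm_label(transition):
    # Stand-in for the LLM data enhancer: tag each transition with the goal
    # it appears to accomplish (a trivial rule here, an LLM in the paper).
    state, action, next_state = transition
    return "reach:" + next_state

def train_policy(labeled):
    # Stand-in for offline policy learning: memorize (state, goal) -> action.
    policy = {}
    for (state, action, next_state), goal in labeled:
        policy[(state, goal)] = action
    return policy

def llm_generalize(instruction, known_goals):
    # Stand-in for the LLM generalizer: map a free-form instruction
    # to the closest goal seen during training.
    for g in sorted(known_goals):
        if g.split(":")[1] in instruction:
            return g
    return None

# Unlabeled offline dataset of (state, action, next_state) transitions.
data = [("A", "right", "B"), ("B", "up", "C")]
labeled = [(t, llm_label(t)) for t in data]          # step 1: data enhancement
policy = train_policy(labeled)                       # step 2: offline learning
goal = llm_generalize("please go to C",              # step 3: generalization
                      {g for _, g in labeled})
print(policy[("B", goal)])
```

The point of the decoupling is visible even in this toy: steps 1 and 2 never see the test-time instruction, and step 3 never touches the policy's internals, so each stage can be improved or swapped independently.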