LLMs for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Addressing the generalization bottleneck of natural-language-driven autonomous agents in offline, few-shot, and unlabeled real-world settings, this paper proposes TEDUO, a novel training paradigm that gives large language models (LLMs) dual roles: offline data augmenter and zero-shot policy generalizer. TEDUO integrates LLMs' instruction-following capability and world priors into an offline reinforcement learning framework, eliminating the need for human annotations or online interaction. Its core innovation lies in decoupling language-conditioned policy learning from cross-goal and cross-state generalization, substantially improving sample efficiency and robustness under minimal data constraints. Experiments include an in-the-wild, open-scenario evaluation demonstrating reliable language-conditioned execution on unseen goals and states. TEDUO thus offers a scalable paradigm for decision-making agents in low-resource environments.

πŸ“ Abstract
To develop autonomous agents capable of executing complex, multi-step decision-making tasks specified by humans in natural language, existing reinforcement learning approaches typically require expensive labeled datasets or access to real-time experimentation. Moreover, conventional methods often struggle to generalize to unseen goals and states, limiting their practical applicability. This paper presents TEDUO, a novel training pipeline for offline language-conditioned policy learning. TEDUO operates on easy-to-obtain, unlabeled datasets and is suited for the so-called in-the-wild evaluation, wherein the agent encounters previously unseen goals and states. To address the challenges posed by such data and evaluation settings, our method leverages the prior knowledge and instruction-following capabilities of large language models (LLMs) to enhance the fidelity of pre-collected offline data and enable flexible generalization to new goals and states. Empirical results demonstrate that the dual role of LLMs in our framework, as data enhancers and generalizers, facilitates both effective and data-efficient learning of generalizable language-conditioned policies.
Problem

Research questions and friction points this paper is trying to address.

Develop autonomous agents for complex language-specified tasks
Generalize policies to unseen goals and states
Learn robust policies from low-fidelity offline data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs enhance offline datasets with annotations
LLMs serve as instruction-following agents
Combines LLMs and RL for robust policies
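The two LLM roles above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: `llm_label` and `llm_generalize` are hypothetical rule-based stand-ins for what would actually be LLM prompts, and the environment is a toy 1-D gridworld with tabular offline Q-learning standing in for the policy-learning stage.

```python
# Hypothetical TEDUO-style pipeline sketch (assumptions: toy 1-D gridworld,
# rule-based stand-ins for the LLM calls, tabular offline Q-learning).
from collections import defaultdict

# Stage 0: unlabeled offline transitions (state, action, next_state).
# States are integers 0..5 on a line; action +1/-1 moves right/left.
dataset = [(s, a, s + a) for s in range(5) for a in (-1, 1) if 0 <= s + a <= 5]

def llm_label(transition, goals):
    """Stand-in for the LLM data enhancer: hindsight-tags a transition
    with every candidate goal state it reaches (the real pipeline
    would prompt an LLM to annotate raw trajectories)."""
    _, _, s_next = transition
    return [g for g in goals if s_next == g]

def train_goal_policy(dataset, goal, gamma=0.9, iters=50):
    """Offline tabular Q-learning: sweep the fixed dataset repeatedly,
    rewarding transitions that llm_label associates with this goal."""
    Q = defaultdict(float)
    for _ in range(iters):
        for s, a, s2 in dataset:
            r = 1.0 if goal in llm_label((s, a, s2), [goal]) else 0.0
            best_next = max(Q[(s2, a2)] for a2 in (-1, 1))
            Q[(s, a)] = r + gamma * best_next * (s2 != goal)  # terminal at goal
    return Q

goals = [0, 5]
policies = {g: train_goal_policy(dataset, g) for g in goals}

def llm_generalize(instruction, goals):
    """Stand-in for the LLM zero-shot generalizer: maps an unseen
    natural-language instruction onto one of the trained goals."""
    return max(goals) if "right" in instruction else min(goals)

def act(state, instruction):
    """Language-conditioned policy: LLM picks the goal, Q-table picks the action."""
    Q = policies[llm_generalize(instruction, goals)]
    return max((-1, 1), key=lambda a: Q[(state, a)])
```

The sketch mirrors the claimed decoupling: goal labeling and goal-conditioned policy learning happen entirely offline, while generalization to a new instruction is deferred to the LLM stand-in at decision time.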
T. Pouplin
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Katarzyna Kobalczyk
University of Cambridge
Machine Learning · Artificial Intelligence
Hao Sun
Department of Applied Mathematics and Theoretical Physics, University of Cambridge
M. Schaar
Department of Applied Mathematics and Theoretical Physics, University of Cambridge