Zero-Shot Generalization of Vision-Based RL Without Data Augmentation

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Visual reinforcement learning struggles with zero-shot generalization to unseen environments. Method: This paper proposes ALDA (Associative Latent DisentAnglement), a framework built on standard off-policy RL that combines an associative memory mechanism with disentangled latent representation learning, avoiding the conventional reliance on large-scale data augmentation or additional environment interaction. The authors formally show that data augmentation is a form of weak disentanglement, positioning disentanglement itself as the more fundamental route to zero-shot generalization. ALDA models environment-invariant structure via associative memory and integrates with standard off-policy algorithms (e.g., SAC) for end-to-end training. Results: Across difficult task variants, ALDA outperforms data-augmentation baselines in zero-shot generalization while improving training stability and reducing computational overhead.

📝 Abstract
Generalizing vision-based reinforcement learning (RL) agents to novel environments remains a difficult and open challenge. Current trends are to collect large-scale datasets or use data augmentation techniques to prevent overfitting and improve downstream generalization. However, the computational and data collection costs increase exponentially with the number of task variations and can destabilize the already difficult task of training RL agents. In this work, we take inspiration from recent advances in computational neuroscience and propose a model, Associative Latent DisentAnglement (ALDA), that builds on standard off-policy RL towards zero-shot generalization. Specifically, we revisit the role of latent disentanglement in RL and show how combining it with a model of associative memory achieves zero-shot generalization on difficult task variations without relying on data augmentation. Finally, we formally show that data augmentation techniques are a form of weak disentanglement and discuss the implications of this insight.
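To make the abstract's core idea concrete, here is a minimal toy sketch of the two ingredients it names: a (hypothetically linear) encoder meant to produce disentangled latent factors, and an associative memory that snaps a perturbed latent back to a stored prototype before it reaches the policy. All names (`AssociativeMemory`, `disentangled_encode`) and the nearest-neighbor retrieval rule are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class AssociativeMemory:
    """Toy associative memory: stores latent prototypes and retrieves
    the nearest stored pattern for a query. This nearest-neighbor rule
    is a stand-in; ALDA's actual retrieval mechanism may differ."""

    def __init__(self):
        self.patterns = []

    def store(self, z):
        self.patterns.append(np.asarray(z, dtype=float))

    def retrieve(self, z):
        # Return the stored pattern closest (in L2 distance) to the query.
        z = np.asarray(z, dtype=float)
        dists = [np.linalg.norm(z - p) for p in self.patterns]
        return self.patterns[int(np.argmin(dists))]

def disentangled_encode(obs, w_factors):
    """Hypothetical linear encoder: each row of w_factors is intended to
    capture one independent factor of variation (disentanglement)."""
    return w_factors @ np.asarray(obs, dtype=float)

# Toy usage: encode an observation, restore it to the nearest known
# prototype, and hand the cleaned latent to a downstream (e.g. SAC) policy.
w = np.eye(2)                  # identity "encoder" for illustration only
mem = AssociativeMemory()
mem.store([1.0, 0.0])          # prototype for factor A
mem.store([0.0, 1.0])          # prototype for factor B

z = disentangled_encode([0.9, 0.1], w)  # slightly perturbed observation
z_clean = mem.retrieve(z)               # retrieval recovers the prototype
```

The intuition this sketch illustrates: if task variations perturb only nuisance factors of a disentangled latent, an associative memory can map the perturbed code back to a familiar one, which is how the method can generalize zero-shot without augmenting the training data.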
Problem

Research questions and friction points this paper is trying to address.

Zero-shot generalization in vision-based RL without data augmentation
Overcoming overfitting and high computational costs in RL training
Latent disentanglement combined with associative memory for generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

ALDA model for zero-shot generalization
Latent disentanglement with associative memory
Data augmentation as weak disentanglement
Sumeet Batra
University of Southern California
ML, Robotics, AI
Gaurav S. Sukhatme
University of Southern California