🤖 AI Summary
Visual reinforcement learning struggles with zero-shot generalization to unseen environments. Method: This paper proposes ALDA (Associative Latent DisentAnglement), a framework built on standard off-policy RL that combines an associative memory mechanism with disentangled latent representation learning, avoiding the usual reliance on large-scale datasets or data augmentation. The authors formally show that data augmentation techniques amount to a weak form of disentanglement, motivating disentanglement itself as the more direct route to zero-shot generalization. ALDA uses associative memory to recover environment-invariant latent factors and integrates with standard off-policy algorithms such as SAC for end-to-end training. Results: On difficult task variants, ALDA outperforms data-augmentation baselines in zero-shot generalization while sidestepping the training instability and computational overhead that augmentation introduces.
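To make the associative-memory idea concrete, below is a minimal sketch of Hopfield-style retrieval over stored latent prototypes: a perturbed latent (e.g., an encoding of a visually altered environment) is mapped back toward its stored, environment-invariant prototype via softmax attention. This is an illustrative toy, not the paper's actual implementation; the function name, the `beta` temperature, and the use of orthogonal prototypes are all assumptions for the demo.

```python
import numpy as np

def associative_retrieve(query, memory, beta=8.0):
    """Hopfield-style associative retrieval (illustrative sketch).

    Maps a (possibly perturbed) latent `query` onto a convex
    combination of stored prototypes via softmax attention.
    `memory` has shape (num_patterns, dim).
    """
    scores = memory @ query                    # similarity to each stored pattern
    w = np.exp(beta * (scores - scores.max()))
    w /= w.sum()                               # softmax attention weights
    return w @ memory                          # retrieved (denoised) latent

# Toy demo with orthogonal prototypes: a perturbed latent
# snaps back close to its stored prototype.
memory = np.eye(4, 8)                          # 4 prototypes in an 8-dim latent space
query = memory[2] + 0.05 * np.ones(8)          # prototype 2, uniformly perturbed
retrieved = associative_retrieve(query, memory)

# Retrieval lands much nearer the prototype than the raw query did.
assert np.argmax(retrieved) == 2
assert np.linalg.norm(retrieved - memory[2]) < np.linalg.norm(query - memory[2])
```

With a sufficiently sharp `beta`, the attention weights concentrate on the best-matching prototype, which is the sense in which retrieval "cleans up" nuisance variation in the latent before it reaches the policy.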
📝 Abstract
Generalizing vision-based reinforcement learning (RL) agents to novel environments remains a difficult and open challenge. Current approaches collect large-scale datasets or apply data augmentation techniques to prevent overfitting and improve downstream generalization. However, the computational and data collection costs increase exponentially with the number of task variations and can destabilize the already difficult task of training RL agents. In this work, we take inspiration from recent advances in computational neuroscience and propose a model, Associative Latent DisentAnglement (ALDA), that builds on standard off-policy RL to achieve zero-shot generalization. Specifically, we revisit the role of latent disentanglement in RL and show how combining it with a model of associative memory achieves zero-shot generalization on difficult task variations without relying on data augmentation. Finally, we formally show that data augmentation techniques are a form of weak disentanglement and discuss the implications of this insight.