AI Summary
In online goal-conditioned reinforcement learning (GCRL) with visual observations, state representation coverage is insufficient due to policy-induced bias in conventional autoencoding methods (causing latent-space localization), a problem exacerbated by intrinsic-motivation-driven sampling, which further skews coverage. Method: We propose a distributionally robust autoencoding framework, the first to integrate distributionally robust optimization (DRO) into online GCRL representation learning. It introduces a learnable adversarial neural weighter that dynamically reweights VAE training samples to promote prospective, uniform coverage of unseen state regions. The method unifies the β-VAE and DRO without requiring pretraining or environmental priors. Contribution/Results: Evaluated on challenging exploration tasks, including maze navigation and obstacle-avoidance robotics, the approach significantly improves state-space coverage and downstream policy performance, demonstrating superior generalization over existing autoencoding baselines.
Abstract
Goal-Conditioned Reinforcement Learning (GCRL) enables agents to autonomously acquire diverse behaviors, but faces major challenges in visual environments due to high-dimensional, semantically sparse observations. In the online setting, where agents learn representations while exploring, the latent space evolves with the agent's policy to capture newly discovered areas of the environment. However, without an incentive to maximize state coverage in the representation, classical approaches based on auto-encoders may converge to latent spaces that over-represent a restricted set of states frequently visited by the agent. This is exacerbated in an intrinsic motivation setting, where the agent uses the distribution encoded in the latent space to sample the goals it learns to master. To address this issue, we propose to progressively enforce distributional shifts towards a uniform distribution over the full state space, to ensure full coverage of the skills that can be learned in the environment. We introduce DRAG (Distributionally Robust Auto-Encoding for GCRL), a method that combines the $\beta$-VAE framework with Distributionally Robust Optimization. DRAG leverages an adversarial neural weighter of training states of the VAE, to account for the mismatch between the current data distribution and unseen parts of the environment. This allows the agent to construct semantically meaningful latent spaces beyond its immediate experience. Our approach improves state space coverage and downstream control performance on hard exploration environments such as mazes and robotic control involving walls to bypass, without pre-training or prior environment knowledge.
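The abstract describes adversarially reweighting the VAE's training states to counter the bias towards frequently visited states. As a minimal, hypothetical sketch only (DRAG itself trains a neural weighter, which is not reproduced here), a KL-regularized DRO objective admits a closed-form adversarial distribution: a softmax over per-sample losses, which upweights poorly modeled (typically under-visited) states. The function names and the temperature parameter below are illustrative assumptions, not the paper's API:

```python
import numpy as np

def dro_weights(per_sample_losses, temperature=1.0):
    """Closed-form adversarial weights for KL-regularized DRO:
    a softmax over per-sample losses (higher loss -> higher weight).
    `temperature` controls how far the weights drift from uniform."""
    z = np.asarray(per_sample_losses, dtype=float) / temperature
    z = z - z.max()  # numerical stability before exponentiation
    w = np.exp(z)
    return w / w.sum()

def weighted_beta_vae_loss(recon_losses, kl_losses, beta=4.0, temperature=1.0):
    """Reweighted beta-VAE objective: per-sample loss is
    reconstruction + beta * KL, averaged under the adversarial
    DRO distribution instead of the empirical (visitation) one."""
    per_sample = np.asarray(recon_losses, dtype=float) \
        + beta * np.asarray(kl_losses, dtype=float)
    w = dro_weights(per_sample, temperature)
    return float(np.sum(w * per_sample)), w

# Usage: states with larger loss (e.g. rarely visited regions)
# receive more weight than under a uniform empirical average.
total, w = weighted_beta_vae_loss([1.0, 2.0, 3.0], [0.1, 0.1, 0.1])
```

As the temperature grows, the weights approach uniform and the objective recovers the standard empirical average; as it shrinks, training concentrates on the hardest (least covered) states.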