Imagine Beyond! Distributionally Robust Auto-Encoding for State Space Coverage in Online Reinforcement Learning

šŸ“… 2025-05-23
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
In online goal-conditioned reinforcement learning (GCRL) with visual observations, conventional autoencoding methods yield insufficient state representation coverage: policy-induced bias localizes the latent space around frequently visited states, and intrinsic-motivation-driven goal sampling further skews coverage. Method: the authors propose a distributionally robust autoencoding framework, the first to integrate distributionally robust optimization (DRO) into online GCRL representation learning. It introduces a learnable adversarial neural weighter that dynamically reweights VAE training samples to promote prospective, uniform coverage of unseen state regions. The method unifies the β-VAE objective with DRO and requires neither pretraining nor environmental priors. Contribution/Results: evaluated on challenging exploration tasks, including maze navigation and obstacle-avoidance robotic control, the approach significantly improves state-space coverage and downstream policy performance, demonstrating stronger generalization than existing autoencoding baselines.

šŸ“ Abstract
Goal-Conditioned Reinforcement Learning (GCRL) enables agents to autonomously acquire diverse behaviors, but faces major challenges in visual environments due to high-dimensional, semantically sparse observations. In the online setting, where agents learn representations while exploring, the latent space evolves with the agent's policy to capture newly discovered areas of the environment. However, without incentivization to maximize state coverage in the representation, classical approaches based on auto-encoders may converge to latent spaces that over-represent a restricted set of states frequently visited by the agent. This is exacerbated in an intrinsic motivation setting, where the agent uses the distribution encoded in the latent space to sample the goals it learns to master. To address this issue, we propose to progressively enforce distributional shifts towards a uniform distribution over the full state space, to ensure a full coverage of skills that can be learned in the environment. We introduce DRAG (Distributionally Robust Auto-Encoding for GCRL), a method that combines the $\beta$-VAE framework with Distributionally Robust Optimization. DRAG leverages an adversarial neural weighter of training states of the VAE, to account for the mismatch between the current data distribution and unseen parts of the environment. This allows the agent to construct semantically meaningful latent spaces beyond its immediate experience. Our approach improves state space coverage and downstream control performance on hard exploration environments such as mazes and robotic control involving walls to bypass, without pre-training nor prior environment knowledge.
Problem

Research questions and friction points this paper is trying to address.

Addresses limited state coverage in Goal-Conditioned Reinforcement Learning
Prevents latent space over-representation of frequent states
Ensures uniform state space coverage for diverse skill learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enforces uniform state space coverage
Combines β-VAE with robust optimization
Uses adversarial neural weighter
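The Innovation bullets describe an adversarial weighter that upweights under-represented states in the VAE's training objective. As a simplified, hypothetical sketch of the underlying DRO principle only: the paper learns the weights with an adversarial neural network, whereas the snippet below uses the closed-form exponential reweighting that arises from KL-regularized DRO, which upweights samples with high per-sample loss (rare, poorly reconstructed states) rather than letting frequent states dominate.

```python
import numpy as np

def dro_weights(losses, eta):
    """KL-regularized DRO reweighting (illustrative, not the paper's learned weighter).

    The adversarial distribution that maximizes the expected loss, subject to a
    KL penalty of strength eta toward the empirical distribution, has the
    closed form w_i proportional to exp(loss_i / eta).
    """
    losses = np.asarray(losses, dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    z = np.exp((losses - losses.max()) / eta)
    return z / z.sum()

# Toy per-sample VAE losses: three frequently visited (well-modeled) states
# and one rarely visited state the auto-encoder reconstructs poorly.
per_sample_losses = [0.1, 0.1, 0.1, 2.0]
w = dro_weights(per_sample_losses, eta=0.5)

# The rare, high-loss state receives by far the largest training weight,
# so the weighted VAE objective emphasizes under-covered regions.
weighted_loss = float(np.dot(w, per_sample_losses))
```

Smaller `eta` makes the reweighting more adversarial (concentrating mass on the worst-reconstructed states), while large `eta` recovers near-uniform weights over the batch.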
Nicolas Castanet
ISIR, MLIA, Sorbonne UniversitƩ, Paris, France
Olivier Sigaud
Professor in Computer Science, Sorbonne UniversitƩ
deep reinforcement learning, artificial intelligence, machine learning, developmental robotics, computational neuroscience
Sylvain Lamprier
LERIA, UniversitĆ© d’Angers, Angers, France