WIMLE: Uncertainty-Aware World Models with IMLE for Sample-Efficient Continuous Control

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in model-based reinforcement learning—namely, error accumulation due to model bias, the inability of unimodal world models to capture multimodal dynamics, and overconfident predictions—by introducing implicit maximum likelihood estimation (IMLE) into world model construction for the first time. The proposed stochastic world model effectively captures multimodal environmental dynamics and quantifies predictive uncertainty through ensemble learning and latent-space sampling. During training, synthetic transition data are confidence-weighted to mitigate model bias. Evaluated across 40 continuous control tasks, the method significantly outperforms state-of-the-art baselines, achieving over 50% improvement in sample efficiency on the Humanoid-run task and successfully solving 8 out of 14 tasks in the HumanoidBench suite, substantially surpassing existing approaches.
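The summary's central idea is implicit maximum likelihood estimation (IMLE): rather than maximizing an explicit likelihood, the model draws several latent samples per data point and is trained to pull only the *nearest* generated sample toward the data, which preserves multiple modes instead of averaging over them. The paper's actual architecture is not shown here; the following is a minimal sketch of the IMLE objective under assumed shapes, where `generate` is a hypothetical stand-in for the stochastic world model's decoder.

```python
import numpy as np

def imle_loss(generate, x_batch, m=10, z_dim=8, rng=None):
    """Monte-Carlo IMLE objective (sketch, not the paper's code).

    For each real sample, draw m latent codes, generate m candidate
    outputs, and keep only the squared distance to the nearest
    candidate -- the nearest-sample matching that lets IMLE capture
    multi-modal targets without mode averaging.
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for x in x_batch:
        z = rng.standard_normal((m, z_dim))          # m latent codes
        candidates = generate(z)                     # (m, x_dim) candidates
        dists = ((candidates - x) ** 2).sum(axis=1)  # squared L2 per candidate
        total += dists.min()                         # nearest-sample term only
    return total / len(x_batch)
```

Because only the closest candidate receives gradient signal, a mode of the true dynamics that is far from all current samples is never "blended" into other modes, which is the property the summary credits for capturing multimodal environmental dynamics.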

📝 Abstract
Model-based reinforcement learning promises strong sample efficiency but often underperforms in practice due to compounding model error, unimodal world models that average over multi-modal dynamics, and overconfident predictions that bias learning. We introduce WIMLE, a model-based method that extends Implicit Maximum Likelihood Estimation (IMLE) to the model-based RL framework to learn stochastic, multi-modal world models without iterative sampling and to estimate predictive uncertainty via ensembles and latent sampling. During training, WIMLE weights each synthetic transition by its predicted confidence, preserving useful model rollouts while attenuating bias from uncertain predictions and enabling stable learning. Across $40$ continuous-control tasks spanning DeepMind Control, MyoSuite, and HumanoidBench, WIMLE achieves superior sample efficiency and competitive or better asymptotic performance than strong model-free and model-based baselines. Notably, on the challenging Humanoid-run task, WIMLE improves sample efficiency by over $50$\% relative to the strongest competitor, and on HumanoidBench it solves $8$ of $14$ tasks (versus $4$ for BRO and $5$ for SimbaV2). These results highlight the value of IMLE-based multi-modality and uncertainty-aware weighting for stable model-based RL.
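The abstract states that WIMLE weights each synthetic transition by its predicted confidence, with uncertainty estimated via ensembles and latent sampling. The paper's exact weighting rule is not reproduced on this page; below is a hedged sketch of one plausible form, in which per-sample ensemble disagreement is mapped through an exponential to a weight in (0, 1] that scales the critic loss on model-generated data. The function names and the `temperature` parameter are illustrative assumptions, not the authors' API.

```python
import numpy as np

def confidence_weights(ensemble_preds, temperature=1.0):
    """Map ensemble disagreement to a confidence weight (sketch).

    ensemble_preds: array of shape (K, batch, state_dim) holding the
    K ensemble members' next-state predictions. Per-sample std across
    members measures disagreement; exp(-disagreement / temperature)
    keeps confident rollouts near weight 1 and attenuates uncertain ones.
    """
    disagreement = ensemble_preds.std(axis=0).mean(axis=-1)  # (batch,)
    return np.exp(-disagreement / temperature)

def weighted_critic_loss(td_errors, weights):
    # Down-weight TD errors computed on uncertain synthetic transitions,
    # so model bias from overconfident predictions is attenuated.
    return float(np.mean(weights * td_errors ** 2))
```

Under this scheme, transitions on which the ensemble agrees contribute fully to learning, while high-disagreement rollouts are softly discounted rather than discarded, matching the abstract's goal of "preserving useful model rollouts while attenuating bias."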
Problem

Research questions and friction points this paper is trying to address.

model-based reinforcement learning
compounding model error
multi-modal dynamics
overconfident predictions
sample efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit Maximum Likelihood Estimation
multi-modal world models
uncertainty-aware weighting
model-based reinforcement learning
sample-efficient control
Mehran Aghabozorgi
Apex Lab, School of Computing Science, Simon Fraser University
Alireza Moazeni
Apex Lab, School of Computing Science, Simon Fraser University
Yanshu Zhang
Apex Lab, School of Computing Science, Simon Fraser University
Ke Li
Simon Fraser University
Machine Learning · Computer Vision · Algorithms