🤖 AI Summary
This work addresses key challenges in model-based reinforcement learning—namely, error accumulation due to model bias, the inability of unimodal world models to capture multimodal dynamics, and overconfident predictions—by introducing implicit maximum likelihood estimation (IMLE) into world model construction for the first time. The proposed stochastic world model effectively captures multimodal environmental dynamics and quantifies predictive uncertainty through ensemble learning and latent-space sampling. During training, synthetic transition data are confidence-weighted to mitigate model bias. Evaluated across 40 continuous control tasks, the method significantly outperforms state-of-the-art baselines, achieving over 50% improvement in sample efficiency on the Humanoid-run task and successfully solving 8 out of 14 tasks in the HumanoidBench suite, substantially surpassing existing approaches.
📝 Abstract
Model-based reinforcement learning promises strong sample efficiency but often underperforms in practice due to compounding model error, unimodal world models that average over multi-modal dynamics, and overconfident predictions that bias learning. We introduce WIMLE, a model-based method that extends Implicit Maximum Likelihood Estimation (IMLE) to the model-based RL framework to learn stochastic, multi-modal world models without iterative sampling and to estimate predictive uncertainty via ensembles and latent sampling. During training, WIMLE weights each synthetic transition by its predicted confidence, preserving useful model rollouts while attenuating bias from uncertain predictions and enabling stable learning. Across $40$ continuous-control tasks spanning DeepMind Control, MyoSuite, and HumanoidBench, WIMLE achieves superior sample efficiency, with asymptotic performance competitive with or better than strong model-free and model-based baselines. Notably, on the challenging Humanoid-run task, WIMLE improves sample efficiency by over $50\%$ relative to the strongest competitor, and on HumanoidBench it solves $8$ of $14$ tasks (versus $4$ for BRO and $5$ for SimbaV2). These results highlight the value of IMLE-based multi-modality and uncertainty-aware weighting for stable model-based RL.
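To make the two core mechanisms concrete, the sketch below illustrates (a) an IMLE-style objective, where multiple latent samples are drawn and only the prediction nearest to the observed next state incurs loss, and (b) confidence weighting of a synthetic transition via ensemble disagreement. This is a minimal toy sketch with linear "world models" and illustrative names (`K`, `m`, `lam`, etc. are assumptions, not details from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "world models": next_state = W @ [state, action, z].
# Dimensions and hyperparameters below are illustrative only.
state_dim, action_dim, latent_dim = 3, 2, 4
K, m = 5, 8  # ensemble size, IMLE latent samples per data point

ensemble = [rng.normal(size=(state_dim, state_dim + action_dim + latent_dim))
            for _ in range(K)]

def predict(W, s, a, z):
    """One stochastic next-state prediction given a latent draw z."""
    return W @ np.concatenate([s, a, z])

# --- (a) IMLE-style objective on one real transition ---
# Draw m latent samples, keep the candidate closest to the observed next
# state, and penalize only that nearest sample: this lets a single model
# cover multi-modal dynamics without averaging over modes.
s, a = rng.normal(size=state_dim), rng.normal(size=action_dim)
s_next = rng.normal(size=state_dim)
zs = rng.normal(size=(m, latent_dim))
preds = np.stack([predict(ensemble[0], s, a, z) for z in zs])
nearest = preds[np.argmin(np.linalg.norm(preds - s_next, axis=1))]
imle_loss = np.sum((nearest - s_next) ** 2)

# --- (b) Confidence weight for a synthetic transition ---
# Disagreement across ensemble members (each with its own latent draw)
# is mapped to a weight in (0, 1]; uncertain rollouts are down-weighted
# when training the policy on model-generated data.
lam = 1.0  # assumed temperature hyperparameter
samples = np.stack([predict(W, s, a, rng.normal(size=latent_dim))
                    for W in ensemble])
disagreement = samples.std(axis=0).mean()
confidence_weight = np.exp(-lam * disagreement)
```

In an actual training loop, `imle_loss` would be minimized by gradient descent over each ensemble member's parameters, and `confidence_weight` would scale the actor-critic loss contributed by each imagined transition; the paper's exact weighting scheme may differ from this exponential form.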