World Model Robustness via Surprise Recognition

πŸ“… 2025-11-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In real-world deployments, AI systems are vulnerable to environmental disturbances and out-of-distribution (OOD) sensor noise, which can destabilize policies and create safety risks. To address this, we propose a robust reinforcement learning framework built on the world model's own measure of surprisal: it dynamically detects and suppresses anomalous sensory inputs via multi- or single-representation rejection sampling, improving world model stability under unknown perturbations. The method is architecture-agnostic and integrates seamlessly with state-of-the-art world models, including DreamerV3 and Cosmos. Extensive evaluation in CARLA and Safety Gymnasium shows that the approach largely preserves policy performance across diverse noise types and intensities, demonstrating both its effectiveness and generalizability. The implementation is publicly available.

πŸ“ Abstract
AI systems deployed in the real world must contend with distractions and out-of-distribution (OOD) noise that can destabilize their policies and lead to unsafe behavior. While robust training can reduce sensitivity to some forms of noise, it is infeasible to anticipate all possible OOD conditions. To mitigate this issue, we develop an algorithm that leverages a world model's inherent measure of surprise to reduce the impact of noise in world-model-based reinforcement learning agents. We introduce both multi-representation and single-representation rejection sampling, enabling robustness to settings with multiple faulty sensors or a single faulty sensor. While the introduction of noise typically degrades agent performance, we show that our techniques preserve performance relative to baselines under varying types and levels of noise across multiple environments within self-driving simulation domains (CARLA and Safety Gymnasium). Furthermore, we demonstrate that our methods enhance the stability of two state-of-the-art world models with markedly different underlying architectures: Cosmos and DreamerV3. Together, these results highlight the robustness of our approach across world modeling domains. We release our code at https://github.com/Bluefin-Tuna/WISER.
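The abstract's "inherent measure of surprise" is typically the negative log-likelihood of an observation under the world model's predictive distribution. The following is a minimal sketch of that idea assuming a Gaussian predictive head; `gaussian_surprisal` and the toy numbers are illustrative and are not the paper's actual code or interface.

```python
import numpy as np

def gaussian_surprisal(obs, pred_mean, pred_std):
    """Negative log-likelihood of an observation under a Gaussian
    predictive distribution (a stand-in for a world model's surprise
    measure; real world models score observations in a learned
    latent space)."""
    var = pred_std ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (obs - pred_mean) ** 2 / var)

# Toy example: an in-distribution reading vs. a corrupted one.
rng = np.random.default_rng(0)
pred_mean = np.zeros(4)
pred_std = np.ones(4)
clean = rng.normal(0.0, 1.0, size=4)
noisy = clean + 10.0  # simulated OOD sensor fault
# The corrupted reading scores far higher surprisal than the clean one,
# which is the signal used to detect anomalous inputs.
assert gaussian_surprisal(noisy, pred_mean, pred_std) > \
       gaussian_surprisal(clean, pred_mean, pred_std)
```

In practice a rejection threshold on this score (e.g. a percentile of surprisal seen during training) separates expected inputs from anomalous ones.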
Problem

Research questions and friction points this paper is trying to address.

Enhances AI robustness to unexpected noise and distractions
Mitigates performance degradation from faulty sensors in RL agents
Improves stability of world models under out-of-distribution conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages world model's surprise measure for noise reduction
Introduces multi- and single-representation rejection sampling
Enhances robustness across diverse world model architectures
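The multi-representation variant above can be sketched as scoring each sensor's representation separately and rejecting only the surprising ones. This is a hypothetical illustration, not the paper's implementation: `reject_and_impute`, the sensor names, and the threshold are all made up, and the fallback-to-prediction step is one plausible way to suppress a rejected input.

```python
import numpy as np

def reject_and_impute(readings, pred_means, pred_stds, threshold):
    """Score each sensor reading by its surprisal under the world
    model's (Gaussian) prediction; replace surprising readings with
    the model's own predicted value so one faulty sensor does not
    destabilize the agent."""
    filtered, rejected = {}, []
    for name, obs in readings.items():
        mu, sigma = pred_means[name], pred_stds[name]
        surprisal = 0.5 * np.sum(
            np.log(2 * np.pi * sigma ** 2) + (obs - mu) ** 2 / sigma ** 2
        )
        if surprisal > threshold:
            filtered[name] = mu  # fall back to the model's prediction
            rejected.append(name)
        else:
            filtered[name] = obs  # pass the trusted reading through
    return filtered, rejected

readings = {"lidar": np.array([0.1, -0.2]), "camera": np.array([50.0, 49.0])}
pred_means = {k: np.zeros(2) for k in readings}
pred_stds = {k: np.ones(2) for k in readings}
filtered, rejected = reject_and_impute(readings, pred_means, pred_stds, threshold=5.0)
# The wildly off "camera" reading is rejected and imputed; "lidar" passes through.
```

The single-representation variant would apply the same test once to the whole observation, accepting or rejecting it as a unit.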
Geigh Zollicoffer, PhD Student, Georgia Institute of Technology
Tanush Chopra, Georgia Institute of Technology
Mingkuan Yan, Georgia Institute of Technology
Xiaoxu Ma, Georgia Institute of Technology
Kenneth Eaton, Georgia Institute of Technology
Mark Riedl, Professor of Computing, Georgia Institute of Technology
Artificial Intelligence, Machine Learning, Storytelling, Explainable AI, Safe AI