SOMBRL: Scalable and Optimistic Model-Based RL

📅 2025-11-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In model-based reinforcement learning (MBRL), efficient online exploration under unknown system dynamics remains challenging, particularly in nonlinear, high-dimensional, or vision-based environments. Method: The paper proposes a scalable exploration framework grounded in the principle of optimism in the face of uncertainty, extending optimism-based exploration to nonlinear dynamical systems. It learns an uncertainty-aware dynamics model and greedily maximizes a weighted sum of the extrinsic reward and the agent's epistemic uncertainty, which yields sublinear cumulative regret under common regularity assumptions. The framework is agnostic to the downstream policy optimizer or planner, requiring no modifications to existing algorithms. Results: Evaluated on simulated benchmarks and a real-world RC car platform, the method consistently outperforms state-of-the-art baselines, demonstrating the benefits of principled exploration across complex, dynamic environments.

📝 Abstract
We address the challenge of efficient exploration in model-based reinforcement learning (MBRL), where the system dynamics are unknown and the RL agent must learn directly from online interactions. We propose Scalable and Optimistic MBRL (SOMBRL), an approach based on the principle of optimism in the face of uncertainty. SOMBRL learns an uncertainty-aware dynamics model and greedily maximizes a weighted sum of the extrinsic reward and the agent's epistemic uncertainty. SOMBRL is compatible with any policy optimizers or planners, and under common regularity assumptions on the system, we show that SOMBRL has sublinear regret for nonlinear dynamics in the (i) finite-horizon, (ii) discounted infinite-horizon, and (iii) non-episodic settings. Additionally, SOMBRL offers a flexible and scalable solution for principled exploration. We evaluate SOMBRL on state-based and visual-control environments, where it displays strong performance across all tasks and baselines. We also evaluate SOMBRL on a dynamic RC car hardware and show SOMBRL outperforms the state-of-the-art, illustrating the benefits of principled exploration for MBRL.
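The core recipe described above (an uncertainty-aware dynamics model plus greedy maximization of reward plus weighted epistemic uncertainty) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble-disagreement proxy for epistemic uncertainty and every name (`EnsembleDynamics`, `lambda_opt`) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Toy ensemble: each member is a random linear model s' = A_i s + B_i a.
    Stands in for any learned uncertainty-aware dynamics model."""
    def __init__(self, n_members, state_dim, action_dim):
        self.A = rng.normal(scale=0.1, size=(n_members, state_dim, state_dim))
        self.B = rng.normal(scale=0.1, size=(n_members, state_dim, action_dim))

    def predict(self, s, a):
        # One next-state prediction per ensemble member.
        return np.einsum('mij,j->mi', self.A, s) + np.einsum('mij,j->mi', self.B, a)

def epistemic_uncertainty(model, s, a):
    # Disagreement across members (summed per-dimension std) as an
    # illustrative proxy for epistemic uncertainty at (s, a).
    preds = model.predict(s, a)
    return float(preds.std(axis=0).sum())

def optimistic_reward(extrinsic_r, model, s, a, lambda_opt=1.0):
    # The optimistic objective: weighted sum of extrinsic reward
    # and the agent's epistemic uncertainty.
    return extrinsic_r + lambda_opt * epistemic_uncertainty(model, s, a)
```

The agent then acts greedily with respect to `optimistic_reward`, so it is drawn toward regions where the model is still uncertain while the bonus vanishes as the dynamics are learned.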
Problem

Research questions and friction points this paper is trying to address.

How to explore efficiently in MBRL when the system dynamics are unknown and must be learned from online interaction
How to extend optimism-based exploration guarantees beyond linear systems to nonlinear, high-dimensional, and vision-based settings
How to make principled exploration scalable and compatible with existing policy optimizers and planners
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns an uncertainty-aware dynamics model whose epistemic uncertainty drives exploration
Greedily maximizes a weighted sum of extrinsic reward and epistemic uncertainty
Proves sublinear regret for nonlinear dynamics in finite-horizon, discounted infinite-horizon, and non-episodic settings, independent of the choice of policy optimizer or planner
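Since the method only changes the objective rather than the optimizer, the "compatible with any policy optimizer or planner" property can be illustrated by wrapping an arbitrary reward function with an uncertainty bonus and handing the result to an unmodified planner. Everything below (the wrapper name, the toy reward and uncertainty functions, the random-shooting planner) is a hypothetical sketch, not the paper's code.

```python
import numpy as np

def make_optimistic_reward(reward_fn, uncertainty_fn, lambda_opt=1.0):
    # Wraps any extrinsic reward with an epistemic-uncertainty bonus,
    # so downstream planners/policy optimizers can consume it unchanged.
    def augmented(s, a):
        return reward_fn(s, a) + lambda_opt * uncertainty_fn(s, a)
    return augmented

rng = np.random.default_rng(1)
reward = lambda s, a: -float(np.sum((s + a) ** 2))   # toy extrinsic reward
uncertainty = lambda s, a: float(np.abs(a).sum())    # stand-in uncertainty estimate
opt_reward = make_optimistic_reward(reward, uncertainty, lambda_opt=0.5)

# A trivial random-shooting planner, untouched by the exploration scheme:
s = np.zeros(2)
candidates = rng.normal(size=(64, 2))                # sampled action candidates
best = max(candidates, key=lambda a: opt_reward(s, a))
```

The planner only ever sees a scalar reward function, which is the sense in which the approach requires no modification to existing algorithms.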