Sample-efficient and Scalable Exploration in Continuous-Time RL

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the core challenges of low sample efficiency and poor scalability in reinforcement learning (RL) for continuous-time systems governed by nonlinear ordinary differential equations (ODEs). The authors propose a model-based continuous-time RL framework that captures the unknown dynamics probabilistically using Gaussian processes or Bayesian neural networks. To balance exploration and exploitation, the algorithm greedily maximizes a weighted sum of the extrinsic reward and the model's epistemic uncertainty. The paper establishes theoretical guarantees in this setting: a sublinear regret bound in the reward-driven case and a sample-complexity bound in the unsupervised (reward-free) case. The method trades off modeling fidelity against computational scalability, substantially reducing sample complexity, and empirical evaluation across multiple deep RL benchmarks shows consistent performance gains over state-of-the-art baselines, supporting both its data efficiency and its generalization ability.

📝 Abstract
Reinforcement learning algorithms are typically designed for discrete-time dynamics, even though the underlying real-world control systems are often continuous in time. In this paper, we study the problem of continuous-time reinforcement learning, where the unknown system dynamics are represented using nonlinear ordinary differential equations (ODEs). We leverage probabilistic models, such as Gaussian processes and Bayesian neural networks, to learn an uncertainty-aware model of the underlying ODE. Our algorithm, COMBRL, greedily maximizes a weighted sum of the extrinsic reward and model epistemic uncertainty. This yields a scalable and sample-efficient approach to continuous-time model-based RL. We show that COMBRL achieves sublinear regret in the reward-driven setting, and in the unsupervised RL setting (i.e., without extrinsic rewards), we provide a sample complexity bound. In our experiments, we evaluate COMBRL in both standard and unsupervised RL settings and demonstrate that it scales better, is more sample-efficient than prior methods, and outperforms baselines across several deep RL tasks.
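The exploration rule the abstract describes — greedily maximizing a weighted sum of extrinsic reward and model epistemic uncertainty — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ensemble-disagreement proxy for epistemic uncertainty, the candidate-action search, and the weight `lam` are all assumptions made for the example.

```python
import numpy as np


def epistemic_std(models, x):
    """Disagreement across an ensemble of dynamics models as a simple
    proxy for epistemic uncertainty (an assumption for illustration)."""
    preds = np.stack([m(x) for m in models])
    return float(preds.std(axis=0).sum())


def greedy_action(candidates, reward_fn, models, lam=0.1):
    """Pick the candidate maximizing extrinsic reward plus lam times the
    model's epistemic uncertainty — the weighted-sum objective from the
    abstract, over a finite candidate set for simplicity."""
    scores = [reward_fn(a) + lam * epistemic_std(models, a) for a in candidates]
    return candidates[int(np.argmax(scores))]
```

With `lam` small the rule exploits the reward; as `lam` grows it increasingly favors actions where the models disagree, i.e. where more data would be most informative.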
Problem

Research questions and friction points this paper is trying to address.

Solving reinforcement learning for continuous-time nonlinear dynamical systems
Addressing sample inefficiency in model-based RL exploration
Developing scalable algorithms for continuous control tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Gaussian processes or Bayesian neural networks for uncertainty-aware ODE modeling
Greedily maximizes a weighted sum of extrinsic reward and model epistemic uncertainty
Provides scalable sample-efficient continuous-time RL approach
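A minimal sketch of how a Gaussian-process posterior yields the uncertainty signal these bullets refer to: variance is near zero at observed states and grows far from the data. The RBF kernel, scalar inputs, and hyperparameter values here are illustrative assumptions, not details from the paper.

```python
import numpy as np


def rbf(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D arrays of scalar inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)


def gp_posterior_var(x_train, x_query, noise=1e-4, length=1.0):
    """GP posterior variance at each query point:
    k(x*, x*) - k_*^T (K + noise*I)^{-1} k_*, with k(x, x) = 1 for RBF."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    k_star = rbf(x_train, x_query, length)
    return 1.0 - np.sum(k_star * np.linalg.solve(K, k_star), axis=0)
```

The posterior variance collapses at visited states and approaches the prior variance (here 1.0) far from them, which is exactly the behavior an uncertainty-driven exploration bonus needs.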