🤖 AI Summary
Continuous-time reinforcement learning (CTRL) is poorly understood in terms of how it adapts to varying difficulty levels of dynamic environments. Method: This paper proposes a simple, model-based, instance-dependent learning framework that, rather than estimating system dynamics directly, estimates the state marginal density via maximum likelihood estimation (MLE) with a general function approximator, coupled with a randomized measurement schedule. Contribution/Results: It establishes instance-dependent performance guarantees through a regret bound that scales with the total reward variance and the measurement resolution. Notably, when the observation frequency adapts appropriately to the problem's complexity, the regret becomes independent of the specific measurement strategy, and the randomized schedule improves sample efficiency without increasing measurement cost. The core idea is to replace direct dynamics estimation with state-density estimation, so that learning behavior adjusts automatically to the difficulty of the environment.
📝 Abstract
Continuous-time reinforcement learning (CTRL) provides a natural framework for sequential decision-making in dynamic environments where interactions evolve continuously over time. While CTRL has shown growing empirical success, its ability to adapt to varying levels of problem difficulty remains poorly understood. In this work, we investigate the instance-dependent behavior of CTRL and introduce a simple, model-based algorithm built on maximum likelihood estimation (MLE) with a general function approximator. Unlike existing approaches that estimate system dynamics directly, our method estimates the state marginal density to guide learning. We establish instance-dependent performance guarantees by deriving a regret bound that scales with the total reward variance and measurement resolution. Notably, the regret becomes independent of the specific measurement strategy when the observation frequency adapts appropriately to the problem's complexity. To further improve performance, our algorithm incorporates a randomized measurement schedule that enhances sample efficiency without increasing measurement cost. These results highlight a new direction for designing CTRL algorithms that automatically adjust their learning behavior based on the underlying difficulty of the environment.
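To make the two key ingredients concrete, here is a minimal, self-contained sketch (our own illustration, not the paper's algorithm): a continuous-time state process is observed only at randomized measurement times, and a parametric model of the state marginal density is fit by maximum likelihood. We use an Ornstein-Uhlenbeck process and a Gaussian density model purely for illustration, since both admit closed forms; the paper's framework allows general function approximators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (names and dynamics are ours): simulate an
# Ornstein-Uhlenbeck process dx = -theta * x dt + sigma dW via
# Euler-Maruyama, standing in for an unknown continuous-time system.
theta, sigma, dt, T = 1.0, 0.5, 1e-3, 200.0
n = int(T / dt)
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(size=n - 1)
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + sigma * np.sqrt(dt) * noise[t]

# Randomized measurement schedule: draw observation times from a Poisson
# process with rate `freq` instead of measuring on a fixed uniform grid.
freq = 2.0  # expected measurements per unit time
gaps = rng.exponential(1.0 / freq, size=int(2 * freq * T))
times = np.cumsum(gaps)
idx = (times[times < T] / dt).astype(int)
obs = x[idx]

# Gaussian MLE of the state marginal density: the log-likelihood is
# maximized in closed form by the sample mean and sample variance.
mu_hat = obs.mean()
var_hat = obs.var()

# For an OU process the stationary marginal is N(0, sigma^2 / (2 theta)),
# so the fitted density should roughly recover mean 0 and that variance.
print(mu_hat, var_hat, sigma**2 / (2 * theta))
```

The density estimate would then guide policy learning in place of a learned dynamics model; how that coupling and the adaptive choice of `freq` are done is the subject of the paper, not this sketch.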