A Robust Model-Based Approach for Continuous-Time Policy Evaluation with Unknown Lévy Process Dynamics

📅 2025-04-02
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper proposes a model-based method for continuous-time policy evaluation under unknown Lévy process dynamics, combining maximum likelihood estimation with an iterative tail correction mechanism to improve the stability and accuracy of coefficient recovery.

📝 Abstract
This paper develops a model-based framework for continuous-time policy evaluation (CTPE) in reinforcement learning, incorporating both Brownian and Lévy noise to model stochastic dynamics influenced by rare and extreme events. Our approach formulates the policy evaluation problem as solving a partial integro-differential equation (PIDE) for the value function with unknown coefficients. A key challenge in this setting is accurately recovering the unknown coefficients in the stochastic dynamics, particularly when driven by Lévy processes with heavy-tail effects. To address this, we propose a robust numerical approach that effectively handles both unbiased and censored trajectory datasets. This method combines maximum likelihood estimation with an iterative tail correction mechanism, improving the stability and accuracy of coefficient recovery. Additionally, we establish a theoretical bound for the policy evaluation error based on coefficient recovery error. Through numerical experiments, we demonstrate the effectiveness and robustness of our method in recovering heavy-tailed Lévy dynamics and verify the theoretical error analysis in policy evaluation.
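To give a feel for the "MLE plus iterative tail correction" idea, the toy sketch below is a loose illustration, not the paper's actual estimator: it fits the drift and volatility of a jump-diffusion from discrete increments by Gaussian maximum likelihood, then iteratively discards increments beyond a tail quantile (treated as presumed Lévy jumps) and refits. The function name, the quantile-based censoring rule, and all parameter choices are invented here for illustration.

```python
import numpy as np

def estimate_coeffs(increments, dt, tail_q=0.99, n_iter=3):
    """Toy drift/volatility estimator with iterative tail trimming.

    Illustrative sketch only (hypothetical, not the paper's method):
    Gaussian MLE gives closed-form drift and volatility estimates;
    each iteration censors increments whose deviation from the drift
    exceeds the tail_q quantile, attributing them to heavy-tailed jumps.
    """
    kept = np.asarray(increments, dtype=float)
    b = kept.mean() / dt                    # Gaussian MLE for drift
    sigma = np.sqrt(kept.var() / dt)        # Gaussian MLE for volatility
    for _ in range(n_iter):
        dev = np.abs(kept - b * dt)
        kept = kept[dev <= np.quantile(dev, tail_q)]  # trim presumed jumps
        b = kept.mean() / dt
        sigma = np.sqrt(kept.var() / dt)
    return b, sigma

# Synthetic increments: Brownian part (drift 1.0, volatility 0.5)
# contaminated by rare heavy-tailed jumps (Student-t, df=1.5).
rng = np.random.default_rng(0)
dt, n = 0.01, 20000
diffusive = 1.0 * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(n)
jumps = rng.binomial(1, 0.01, n) * rng.standard_t(df=1.5, size=n)
b_hat, s_hat = estimate_coeffs(diffusive + jumps, dt)
```

Without the trimming step, the infinite-variance jump increments dominate the sample variance and the volatility estimate blows up; censoring the tail first is what makes the Gaussian MLE usable, which is the intuition behind the paper's more principled correction.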
Problem

Research questions and friction points this paper is trying to address.

Develops model-based CTPE with Brownian and Lévy noise
Solves PIDE for value function with unknown coefficients
Handles unbiased and censored data for coefficient recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based framework for continuous-time policy evaluation
Robust numerical approach with maximum likelihood estimation
Iterative tail correction mechanism for stability