Robust Exploratory Stopping under Ambiguity in Reinforcement Learning

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the optimal stopping problem in reinforcement learning under model uncertainty, focusing on robust exploratory decision-making when the agent faces epistemic ambiguity, i.e., potential misspecification of the environment that requires considering multiple probability measures relative to a reference measure. Methodologically, ambiguity is formalized within the *g*-expectation framework, and the stopping problem is recast as an entropy-regularized control problem; a Bernoulli-type stochastic control variable jointly encodes the "continue/stop" and "explore/exploit" decisions. Leveraging backward stochastic differential equations (BSDEs), the authors derive a learnable optimal policy and propose a policy iteration algorithm for efficient computation. Contributions include: (i) a unified framework integrating robustness, exploration, and optimal stopping; (ii) a tractable BSDE-based characterization of the optimal policy; and (iii) numerical experiments demonstrating convergence and robustness across varying ambiguity levels and exploration intensities, supporting safer decisions and more efficient exploration in uncertain environments.

📝 Abstract
We propose and analyze a continuous-time robust reinforcement learning framework for optimal stopping problems under ambiguity. In this framework, an agent chooses a stopping rule motivated by two objectives: robust decision-making under ambiguity and learning about the unknown environment. Here, ambiguity refers to considering multiple probability measures dominated by a reference measure, reflecting the agent's awareness that the reference measure representing her learned belief about the environment may be erroneous. Using the $g$-expectation framework, we reformulate an optimal stopping problem under ambiguity as an entropy-regularized optimal control problem under ambiguity, with Bernoulli-distributed controls to incorporate exploration into the stopping rules. We then derive the optimal Bernoulli-distributed control, characterized by backward stochastic differential equations. Moreover, we establish a policy iteration theorem and implement it as a reinforcement learning algorithm. Numerical experiments demonstrate the convergence and robustness of the proposed algorithm across different levels of ambiguity and exploration.
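In a one-step discrete-time analogue of the Bernoulli-relaxed stopping problem above, the entropy-regularized trade-off between stopping and continuing has a closed-form Gibbs (sigmoid) solution. The sketch below is an illustrative simplification, not the paper's continuous-time BSDE construction; the function name and the `temperature` parameter (playing the role of the exploration weight) are names chosen here:

```python
import math

def exploratory_stop_prob(stop_payoff, cont_value, temperature):
    """Optimal Bernoulli stopping probability for the one-step
    entropy-regularized objective
        max_p  p*G + (1-p)*C + temperature * H(p),
    where G = stop_payoff, C = cont_value, and H(p) is the Bernoulli
    entropy -p*log(p) - (1-p)*log(1-p).  The first-order condition
    temperature*log((1-p)/p) + G - C = 0 gives a sigmoid (Gibbs) form."""
    return 1.0 / (1.0 + math.exp(-(stop_payoff - cont_value) / temperature))
```

As the temperature shrinks, the policy hardens toward the classical stop/continue rule; as it grows, the agent stops with probability near 1/2 regardless of payoffs, i.e., pure exploration.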
Problem

Research questions and friction points this paper is trying to address.

Robust reinforcement learning for optimal stopping under ambiguity
Reformulating stopping problems as entropy-regularized control problems
Developing stopping rules that incorporate exploration, characterized via BSDEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous-time robust reinforcement learning for stopping
Reformulated optimal stopping as entropy-regularized control
Derived optimal exploration via backward stochastic differential equations
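At its optimum, the Bernoulli relaxation replaces the hard maximum of the stop payoff and continuation value with a "soft max" (log-sum-exp) weighted by the exploration temperature. A discrete-time, finite-state toy analogue of the resulting backward recursion can therefore be sketched as below; this is a minimal illustration under assumed names (`soft_stopping_value`, `temp`), not the paper's continuous-time BSDE-based policy iteration:

```python
import numpy as np

def soft_stopping_value(payoff, transition, horizon, temp, discount=1.0):
    """Backward recursion for an entropy-regularized stopping problem on a
    finite-state chain.  Each step takes the log-sum-exp ('soft max') of the
    stop payoff and the discounted continuation value, which is the value
    achieved by the optimal Bernoulli-relaxed control at that step."""
    v = payoff.copy()                      # terminal value: agent must stop
    for _ in range(horizon):
        cont = discount * transition @ v   # continuation value E[v(next state)]
        v = temp * np.logaddexp(payoff / temp, cont / temp)
    return v
```

As `temp` tends to zero, the recursion recovers the classical Snell-envelope backward induction `v = max(payoff, cont)`; larger temperatures inflate the value by an exploration bonus, mirroring the ambiguity/exploration trade-off studied in the paper.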