🤖 AI Summary
To address high-cost and partially observable Markov decision problems, this paper introduces the Observation-Restricted MDP (ORMDP) framework, which explicitly models both *whether to observe* and *which dimension to observe* as learnable policy actions—marking the first such formulation. Our method decouples perception and control policies and proposes a model-free, end-to-end joint optimization architecture, thereby overcoming the limitations of conventional fully observable assumptions and fixed observation schedules. Leveraging iterative policy training and deep reinforcement learning, we evaluate the framework on HeartPole, a medical simulation environment. Results demonstrate an average 32.7% reduction in observation cost and a 21.4% improvement in task completion efficiency over baseline methods. These gains validate ORMDP’s effectiveness and generalizability under realistic observational cost constraints.
📝 Abstract
In many practical applications, decision-making processes must balance the costs of acquiring information with the benefits it provides. Traditional control systems often assume full observability, an unrealistic assumption when observations are expensive. We tackle the challenge of simultaneously learning observation and control strategies in such cost-sensitive environments by introducing the Observation-Constrained Markov Decision Process (OCMDP), where the policy influences the observability of the true state. To manage the complexity arising from the combined observation and control actions, we develop an iterative, model-free deep reinforcement learning algorithm that separates the sensing and control components of the policy. This decomposition enables efficient learning in the expanded action space by focusing on when and what to observe, as well as determining optimal control actions, without requiring knowledge of the environment's dynamics. We validate our approach on a simulated diagnostic task and a realistic healthcare environment using HeartPole. Given both scenarios, the experimental results demonstrate that our model achieves a substantial reduction in observation costs on average, significantly outperforming baseline methods by a notable margin in efficiency.