Tackling Decision Processes with Non-Cumulative Objectives using Reinforcement Learning

📅 2024-05-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper addresses non-cumulative Markov decision processes (NCMDPs), in which the objective is to maximize the expectation of an arbitrary function of the reward sequence (for example, the maximum of the rewards or a Sharpe-ratio-like mean over standard deviation), rather than the conventional discounted cumulative reward. We propose a general, theoretically grounded state-augmentation mapping that transforms any NCMDP into an equivalent standard MDP. This reduction allows classical reinforcement learning algorithms (e.g., DQN, policy gradients) and dynamic programming methods to be applied directly to the larger class of NCMDPs. Empirical evaluation across diverse domains, including classical control, portfolio optimization in finance, and discrete optimization, demonstrates improvements in both final performance and training time. Our core contribution is a formal equivalence between NCMDPs and standard MDPs, accompanied by a scalable algorithmic framework for practical implementation. The approach unifies the treatment of non-cumulative objectives within the standard RL paradigm while preserving computational tractability and theoretical soundness.

📝 Abstract
Markov decision processes (MDPs) are used to model a wide variety of applications, ranging from game playing through robotics to finance. Their optimal policy typically maximizes the expected sum of rewards given at each step of the decision process. However, a large class of problems does not fit straightforwardly into this framework: non-cumulative Markov decision processes (NCMDPs), where instead of the expected sum of rewards, the expected value of an arbitrary function of the rewards is maximized. Example functions include the maximum of the rewards or their mean divided by their standard deviation. In this work, we introduce a general mapping of NCMDPs to standard MDPs. This allows all techniques developed to find optimal policies for MDPs, such as reinforcement learning or dynamic programming, to be applied directly to the larger class of NCMDPs. Focusing on reinforcement learning, we show applications in a diverse set of tasks, including classical control, portfolio optimization in finance, and discrete optimization problems. With our approach, we can improve both final performance and training time compared to relying on standard MDPs.
Problem

Research questions and friction points this paper is trying to address.

Mapping non-cumulative MDPs to standard MDPs for broader applicability
Enabling RL techniques to optimize arbitrary reward functions in NCMDPs
Improving performance and training efficiency in diverse NCMDP applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maps NCMDPs to standard MDPs
Applies reinforcement learning techniques
Improves both final performance and training time (see the wrapper sketch below)
Authors
Maximilian Nägele
Max Planck Institute for the Science of Light, Friedrich-Alexander-Universität Erlangen-Nürnberg
Jan Olle
Max Planck Institute for the Science of Light
Thomas Fösel
Friedrich-Alexander-Universität Erlangen-Nürnberg
Remmy Zen
Max Planck Institute for the Science of Light
Florian Marquardt
Max Planck Institute for the Science of Light, Friedrich-Alexander-Universität Erlangen-Nürnberg
Optomechanics · Machine Learning · AI for Science · Physics for AI · Quantum Technologies