🤖 AI Summary
This work addresses reinforcement learning with submodular reward functions—i.e., those exhibiting diminishing marginal returns—with the goal of efficiently computing an optimal policy that maximizes long-term cumulative reward. To overcome the high computational cost and poor scalability of conventional methods, we propose a pruning-based submodular graph framework for policy optimization: it explicitly models submodular dependencies among actions, introduces a provably approximate algorithm with guaranteed performance bounds, and rigorously analyzes its time and space complexity. The approach preserves the theoretical approximation ratio while substantially reducing computational overhead. Empirical evaluation on standard RL benchmarks demonstrates that our method achieves higher cumulative rewards than state-of-the-art baselines, validating its effectiveness, scalability, and practical utility.
📝 Abstract
In Reinforcement Learning (RL), an agent interacts with the environment via a set of possible actions, and a reward is generated from some unknown distribution. The task is to find an optimal set of actions such that the reward after a given number of time steps is maximized. In the traditional setup, the reward function of an RL problem is assumed to be additive. In reality, however, many problems, including path planning and coverage control, have reward functions that exhibit diminishing returns and can therefore be modeled as submodular functions. In this paper, we study a variant of the RL problem in which the reward function is submodular, and our objective is to find an optimal policy that maximizes this reward function. We propose a pruned submodularity graph-based approach that provides a provably approximate solution within feasible computation time. We analyze the proposed approach's time and space requirements and establish a performance guarantee. We evaluate our method on a benchmark agent-environment setup used in similar previous studies and report the results. From the results, we observe that the policy obtained by our proposed approach yields higher cumulative reward than the baseline methods.
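To make the diminishing-returns property concrete, here is a minimal sketch (not from the paper; the action names and coverage sets are invented for illustration) using a coverage function, a classic example of a submodular reward: each action covers some elements, and the marginal gain of an action can only shrink as the chosen set grows.

```python
def coverage(chosen, cover_map):
    """Reward = number of distinct elements covered by the chosen actions."""
    covered = set()
    for a in chosen:
        covered |= cover_map[a]
    return len(covered)

# Hypothetical action -> covered-elements map (illustrative only)
cover_map = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {1, 4, 5},
}

# Marginal gain of adding "b" to a smaller set vs. a larger superset
gain_small = coverage({"a", "b"}, cover_map) - coverage({"a"}, cover_map)
gain_large = coverage({"a", "c", "b"}, cover_map) - coverage({"a", "c"}, cover_map)

# Submodularity: the marginal gain never increases as the set grows
assert gain_small >= gain_large
```

Here adding "b" to {"a"} gains one new element (4), but adding "b" to {"a", "c"} gains nothing, since 3 and 4 are already covered. An additive reward would credit "b" the same amount in both cases, which is exactly the modeling gap the paper addresses.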