🤖 AI Summary
This work addresses the ill-posed problem of recovering time-varying reward functions from optimal policies or expert demonstrations in maximum-entropy reinforcement learning. To restore identifiability, two structural priors on the rewards are considered: rewards that are mostly constant over time with infrequent changes (sparsity in changes), and rewards expressible as a linear combination of a small number of feature functions (low rank). Under the first prior, reward recovery is recast as a sparsification problem subject to linear constraints, and a polynomial-time algorithm is given that solves it exactly. Under the second, recovery becomes a rank-minimization problem subject to linear constraints, which is handled via convex relaxations of rank such as the nuclear norm. Both formulations yield efficient optimization-based reward-identification algorithms, and examples demonstrate the accuracy and generalizability of the recovered rewards.
📝 Abstract
In this paper, we consider the problem of recovering time-varying reward functions from either optimal policies or demonstrations generated by a maximum-entropy reinforcement learning problem. This problem is highly ill-posed without additional assumptions on the underlying rewards. However, in many applications the rewards are indeed parsimonious, and some prior information is available. We consider two such priors on the rewards: 1) rewards are mostly constant and change only infrequently; 2) rewards can be represented as a linear combination of a small number of feature functions. We first show that the reward identification problem under the former prior can be recast as a sparsification problem subject to linear constraints, and we give a polynomial-time algorithm that solves this sparsification problem exactly. We then show that identifying rewards representable with the minimum number of features can be recast as a rank-minimization problem subject to linear constraints, for which convex relaxations of rank can be invoked. In both cases, these observations lead to efficient optimization-based reward-identification algorithms. Several examples demonstrate the accuracy of the recovered rewards as well as their generalizability.
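To make the first formulation concrete, here is a minimal sketch of recovering a piecewise-constant reward by sparsifying its temporal changes. All names and sizes here (`A`, `b`, the toy dimensions) are illustrative assumptions, not the paper's actual formulation: `A r = b` stands in for the linear constraints induced by the observed optimal policy, and the convex surrogate `min Σ_t |r_{t+1} − r_t|` for the sparsification objective, which is solvable as a linear program.

```python
# Sketch: reward identification with the sparse-changes prior, cast as an LP.
# Hypothetical setup: r ∈ R^T is a per-timestep reward, A r = b collects the
# linear constraints derived from the observed policy, and we minimize the
# total variation of r (a convex surrogate for the number of change points).
import numpy as np
from scipy.optimize import linprog

def recover_piecewise_constant(A, b, T):
    """min_{r,u} sum(u)  s.t.  |D r| <= u,  A r = b,
    where D is the first-difference operator and x = [r (T); u (T-1)]."""
    D = np.diff(np.eye(T), axis=0)                # (T-1, T): (D r)_t = r_{t+1} - r_t
    c = np.concatenate([np.zeros(T), np.ones(T - 1)])
    # Encode |D r| <= u as  D r - u <= 0  and  -D r - u <= 0.
    A_ub = np.block([[D, -np.eye(T - 1)], [-D, -np.eye(T - 1)]])
    b_ub = np.zeros(2 * (T - 1))
    A_eq = np.hstack([A, np.zeros((A.shape[0], T - 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * (2 * T - 1))
    return res.x[:T], res.fun

# Toy example: a reward with a single change point, observed through a few
# random linear measurements (standing in for policy-optimality constraints).
rng = np.random.default_rng(0)
T = 20
r_true = np.concatenate([np.ones(10), 3.0 * np.ones(10)])
A = rng.standard_normal((8, T))
b = A @ r_true
r_hat, tv = recover_piecewise_constant(A, b, T)
```

Since `r_true` is feasible with total variation 2, the optimum satisfies `tv <= 2`; with enough constraints the minimizer typically coincides with `r_true`, illustrating why the sparse-changes prior restores identifiability.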