🤖 AI Summary
In inverse reinforcement learning (IRL), reward function modeling faces a fundamental trade-off between underfitting and overfitting, often exacerbated by reliance on hand-crafted features. To address this, we propose the first model selection framework for IRL grounded in structural risk minimization (SRM). Our approach applies Rademacher complexity theory to IRL, deriving an upper bound on this complexity that serves as a principled model-complexity regularizer; under the linear reward assumption, the regularizer admits an explicit analytical form. The method treats the policy gradient estimated from demonstrations as the empirical risk, eliminating the need for predefined feature engineering. Experiments show that the framework improves reward-model generalization and imitation-policy accuracy while reducing computational overhead, empirically supporting the learning guarantees established by the SRM-based formulation.
📝 Abstract
Inverse reinforcement learning (IRL) usually assumes the reward function model is pre-specified as a weighted sum of features and estimates only the weighting parameters. However, how to select features and determine a proper reward model is nontrivial and experience-dependent. A simplistic model is less likely to contain the ideal reward function, while a model with high complexity leads to substantial computation cost and potential overfitting. This paper addresses this trade-off in model selection for IRL problems by introducing the structural risk minimization (SRM) framework from statistical learning. SRM selects an optimal reward function class from a hypothesis set by minimizing both estimation error and model complexity. To formulate an SRM scheme for IRL, we estimate the policy gradient from the given demonstrations as the empirical risk, and establish an upper bound on the Rademacher complexity as the model penalty of the hypothesis function classes. We further present the SRM learning guarantee. In particular, we provide the explicit form for the linear weighted-sum setting. Simulations demonstrate the performance and efficiency of our algorithm.
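To make the SRM selection rule concrete, here is a minimal, hypothetical sketch on a toy regression stand-in: each hypothesis class is a polynomial family of a given degree, the mean squared fitting residual plays the role of the paper's policy-gradient-based empirical risk, and a generic O(sqrt(d/n)) Rademacher-style term (with an arbitrary illustrative constant 0.1) plays the role of the paper's IRL-specific complexity bound. None of the names or constants come from the paper; only the structure (pick the class minimizing empirical risk plus complexity penalty) does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: scalar input x, noisy cubic target standing in for a "reward".
n = 60
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x**3 - x + 0.05 * rng.standard_normal(n)

def empirical_risk(degree):
    # Least-squares fit within the degree-d class; the mean squared
    # residual is a stand-in for the policy-gradient empirical risk.
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

def complexity_penalty(degree):
    # Generic Rademacher-style penalty, O(sqrt(d / n)). The paper derives
    # a tighter, IRL-specific upper bound; this is purely illustrative.
    return 0.1 * np.sqrt((degree + 1) / n)

# SRM: over a nested hierarchy of classes, minimize risk + penalty.
degrees = range(1, 10)
scores = {d: empirical_risk(d) + complexity_penalty(d) for d in degrees}
best = min(scores, key=scores.get)
print(best)
```

Low-degree classes carry large empirical risk (underfitting), while high-degree classes pay a growing complexity penalty (overfitting risk), so the selected degree lands in between, which is exactly the trade-off SRM is designed to resolve.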