Model Selection for Inverse Reinforcement Learning via Structural Risk Minimization

📅 2023-12-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In inverse reinforcement learning (IRL), reward function modeling faces a fundamental trade-off between underfitting and overfitting, often exacerbated by reliance on hand-crafted features. To address this, we propose the first model selection framework for IRL grounded in structural risk minimization (SRM). Our approach pioneers the application of Rademacher complexity theory to IRL, deriving an upper bound on this complexity as a principled model complexity regularizer. Under the linear reward assumption, we obtain an explicit analytical solution for the regularizer. The method jointly optimizes policy gradient estimation and empirical risk minimization, eliminating the need for predefined feature engineering. Experiments demonstrate that our framework significantly improves reward model generalization and imitation policy accuracy while reducing computational overhead—empirically validating the theoretical learning guarantees established by our SRM-based formulation.
📝 Abstract
Inverse reinforcement learning (IRL) usually assumes the reward function model is pre-specified as a weighted sum of features and estimates the weighting parameters only. However, how to select features and determine a proper reward model is nontrivial and experience-dependent. A simplistic model is less likely to contain the ideal reward function, while a model with high complexity leads to substantial computation cost and potential overfitting. This paper addresses this trade-off in the model selection for IRL problems by introducing the structural risk minimization (SRM) framework from statistical learning. SRM selects an optimal reward function class from a hypothesis set minimizing both estimation error and model complexity. To formulate an SRM scheme for IRL, we estimate the policy gradient from given demonstration as the empirical risk, and establish the upper bound of Rademacher complexity as the model penalty of hypothesis function classes. The SRM learning guarantee is further presented. In particular, we provide the explicit form for the linear weighted sum setting. Simulations demonstrate the performance and efficiency of our algorithm.
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal reward function model in IRL
Balancing model complexity and estimation error
Applying SRM framework to IRL model selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Structural Risk Minimization for IRL
Estimates policy gradient as empirical risk
Bounds Rademacher complexity for model penalty
🔎 Similar Papers
No similar papers found.
Chendi Qu
Chendi Qu
Shanghai Jiao Tong University
optimal controlrobotics
J
Jianping He
Department of Automation, Shanghai Jiao Tong University, Shanghai, China
Xiaoming Duan
Xiaoming Duan
Shanghai Jiao Tong University
J
Jiming Chen
State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China