Non-Adversarial Inverse Reinforcement Learning via Successor Feature Matching

📅 2024-11-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Conventional adversarial inverse reinforcement learning (IRL) suffers from high computational cost, training instability, and reliance on expert action labels. Method: we propose a non-adversarial IRL framework that bypasses explicit reward function learning and instead directly optimizes the policy to match the successor features of expert trajectories. Crucially, it requires only a single state-only demonstration trajectory with no action labels, overcoming a key limitation of behavior cloning (BC). Technically, we formulate a differentiable policy-gradient objective based on the linear decomposition of the return into successor features and a reward vector, fully compatible with standard actor-critic architectures. Results: our method achieves significant gains over state-of-the-art IRL and BC baselines across diverse control tasks, marking the first IRL approach that is reward-free, action-label-free, and driven by a single state-only demonstration, while maintaining training stability and sample efficiency.
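The linear decomposition referenced in the summary is the standard successor-feature factorization; the notation below is an assumption for illustration, since the summary does not reproduce the paper's equations. With base features φ(s) and a reward of the form r(s) = w⊤φ(s):

```latex
\psi^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\,\phi(s_t)\,\middle|\, s_0 = s\right],
\qquad
J(\pi) \;=\; w^{\top}\,\mathbb{E}_{s_0}\!\left[\psi^{\pi}(s_0)\right].
```

Under this factorization, driving the learner's successor features ψ^π toward the expert's ψ^E bounds the return gap for every reward in the span of φ, which is why matching features can replace reward learning.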

📝 Abstract
In inverse reinforcement learning (IRL), an agent seeks to replicate expert demonstrations through interactions with the environment. Traditionally, IRL is treated as an adversarial game, where an adversary searches over reward models, and a learner optimizes the reward through repeated RL procedures. This game-solving approach is both computationally expensive and difficult to stabilize. In this work, we propose a novel approach to IRL by direct policy optimization: exploiting a linear factorization of the return as the inner product of successor features and a reward vector, we design an IRL algorithm by policy gradient descent on the gap between the learner and expert features. Our non-adversarial method does not require learning a reward function and can be solved seamlessly with existing actor-critic RL algorithms. Remarkably, our approach works in state-only settings without expert action labels, a setting which behavior cloning (BC) cannot solve. Empirical results demonstrate that our method learns from as few as a single expert demonstration and achieves improved performance on various control tasks.
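The "gap between the learner and expert features" described in the abstract can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: it uses the raw state vector as the base feature map φ(s) (an assumption; the paper may use learned features), estimates successor features as a discounted feature sum over a single trajectory, and shows the squared-gap objective that the full method would minimize by policy gradient.

```python
import numpy as np

def successor_features(states, gamma=0.99):
    """Monte-Carlo estimate of successor features along one trajectory:
    the discounted sum of per-state features. Here phi(s) = s itself
    (an illustrative assumption, not the paper's feature map)."""
    psi = np.zeros_like(np.asarray(states[0], dtype=float))
    discount = 1.0
    for s in states:
        psi += discount * np.asarray(s, dtype=float)
        discount *= gamma
    return psi

def feature_matching_loss(psi_learner, psi_expert):
    """Squared gap between learner and expert successor features;
    the non-adversarial objective driven to zero during training."""
    return float(np.sum((psi_learner - psi_expert) ** 2))

# Toy check: a learner trajectory identical to the expert's gives zero
# loss; a different trajectory gives a strictly positive loss.
expert_traj = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
learner_traj = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]

psi_e = successor_features(expert_traj)
psi_l = successor_features(learner_traj)
print(feature_matching_loss(psi_e, psi_e))  # 0.0
print(feature_matching_loss(psi_l, psi_e))
```

Note that only expert *states* enter the objective, which is why the method remains applicable in the state-only setting where behavior cloning is not.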
Problem

Research questions and friction points this paper is trying to address.

Adversarial game-solving in IRL is computationally expensive and hard to stabilize
Learning an explicit reward model adds a costly inner optimization loop
Expert demonstrations often lack action labels (state-only), a setting behavior cloning cannot handle
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-adversarial IRL via successor feature matching, with no reward function learning
Direct policy optimization compatible with existing actor-critic RL algorithms
Learns from a single state-only expert demonstration