Action-Constrained Imitation Learning

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the Action-Constrained Imitation Learning (ACIL) problem: standard imitation learning fails when the imitator's action space is strictly smaller than the expert's, causing a mismatch in state-occupancy distributions. The authors propose DTWIL, a framework whose core idea is a Dynamic Time Warping (DTW)-based trajectory alignment mechanism that maps expert demonstrations onto feasible surrogate trajectories satisfying the imitator's action constraints. Alignment is further cast as a Model Predictive Control (MPC) planning problem to generate high-quality proxy demonstration data. Experiments across diverse robotic control tasks demonstrate significant improvements in sample efficiency; the method consistently outperforms mainstream imitation learning baselines, including Behavior Cloning (BC), Generative Adversarial Imitation Learning (GAIL), and Discriminator-Actor-Critic (DAC), under action constraints.

📝 Abstract
Policy learning under action constraints plays a central role in ensuring safe behaviors in various robot control and resource allocation applications. In this paper, we study a new problem setting termed Action-Constrained Imitation Learning (ACIL), where an action-constrained imitator aims to learn from a demonstrative expert with larger action space. The fundamental challenge of ACIL lies in the unavoidable mismatch of occupancy measure between the expert and the imitator caused by the action constraints. We tackle this mismatch through trajectory alignment and propose DTWIL, which replaces the original expert demonstrations with a surrogate dataset that follows similar state trajectories while adhering to the action constraints. Specifically, we recast trajectory alignment as a planning problem and solve it via Model Predictive Control, which aligns the surrogate trajectories with the expert trajectories based on the Dynamic Time Warping (DTW) distance. Through extensive experiments, we demonstrate that learning from the dataset generated by DTWIL significantly enhances performance across multiple robot control tasks and outperforms various benchmark imitation learning algorithms in terms of sample efficiency. Our code is publicly available at https://github.com/NYCU-RL-Bandits-Lab/ACRL-Baselines.
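The DTW distance mentioned in the abstract can be computed with a standard dynamic-programming recursion over two state trajectories. The sketch below is a generic textbook DTW implementation (not code from the DTWIL repository), assuming Euclidean distance between states as the local cost:

```python
import numpy as np

def dtw_distance(traj_a, traj_b):
    """Dynamic Time Warping distance between two state trajectories.

    traj_a: (n, d) array of states; traj_b: (m, d) array of states.
    Returns the minimal cumulative Euclidean alignment cost, allowing
    either trajectory to "wait" (repeat a step) during alignment.
    """
    n, m = len(traj_a), len(traj_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # traj_b waits
                                 D[i, j - 1],      # traj_a waits
                                 D[i - 1, j - 1])  # both advance
    return D[n, m]
```

Because DTW permits repeated steps, a slower surrogate trajectory that passes through the same states as the expert (e.g. due to a tighter action bound) can still achieve low alignment cost, which is exactly why it suits the ACIL setting.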
Problem

Research questions and friction points this paper is trying to address.

Learning policies under action constraints for safe robot control
Addressing occupancy measure mismatch between expert and imitator
Aligning trajectories via Dynamic Time Warping to overcome action space limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Time Warping distance alignment
Model Predictive Control planning
Surrogate dataset generation
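To make the planning idea concrete, here is a minimal random-shooting MPC sketch under an assumed known dynamics model. It is an illustration of the general recipe (sample constrained action sequences, roll them out, keep the one closest to the expert trajectory), not the paper's actual planner; the function name, the Euclidean tracking cost used as a proxy for the DTW objective, and all parameters are this sketch's own assumptions:

```python
import numpy as np

def mpc_align_step(state, expert_traj, dynamics, action_low, action_high,
                   horizon=5, n_samples=64, rng=None):
    """One step of random-shooting MPC for trajectory alignment (sketch).

    Samples action sequences within the imitator's bounds, rolls each out
    with the dynamics model, scores rollouts by distance to the expert
    trajectory, and returns the first action of the best sequence.
    """
    rng = rng or np.random.default_rng(0)
    best_cost, best_action = np.inf, None
    for _ in range(n_samples):
        # Feasible action sequence: sampled inside the constrained box.
        actions = rng.uniform(action_low, action_high,
                              size=(horizon, len(action_low)))
        s, cost = state, 0.0
        for t, a in enumerate(actions):
            s = dynamics(s, a)
            # Track the expert state at the same step (clamped at the end).
            ref = expert_traj[min(t, len(expert_traj) - 1)]
            cost += np.linalg.norm(s - ref)
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action
```

In receding-horizon fashion, only the first action is executed before replanning; repeating this along an expert trajectory yields a surrogate demonstration that respects the action constraints.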
Chia-Han Yeh
National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Tse-Sheng Nan
University of Illinois at Urbana-Champaign, Illinois, United States
Risto Vuorio
Reflection AI
Reinforcement Learning
Wei Hung
National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Hung-Yen Wu
National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Shao-Hua Sun
Assistant Professor at National Taiwan University
Machine Learning, Robot Learning, Reinforcement Learning, Program Synthesis
Ping-Chun Hsieh
Associate Professor, National Chiao Tung University
Multi-Armed Bandits, Reinforcement Learning, Wireless Networks