Latent Wasserstein Adversarial Imitation Learning

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scarcity of expert demonstrations with action labels in real-world imitation learning by proposing a novel adversarial imitation learning approach that requires only state-only expert trajectories. The method leverages a pretrained intent-conditioned value function to construct a dynamics-aware latent space, wherein the state distributions of the expert and the learned policy are aligned using the Wasserstein distance. By operating in this structured latent space, the approach substantially reduces reliance on both the quantity of expert trajectories and the availability of action annotations. Empirical results across multiple MuJoCo environments demonstrate that the method achieves expert-level performance using merely one to a few action-free expert trajectories, outperforming existing Wasserstein-based and adversarial imitation learning techniques.

📝 Abstract
Imitation Learning (IL) enables agents to mimic expert behavior by learning from demonstrations. However, traditional IL methods require large amounts of medium-to-high-quality demonstrations, as well as the actions taken in those demonstrations, both of which are often unavailable. To reduce this need, we propose Latent Wasserstein Adversarial Imitation Learning (LWAIL), a novel adversarial imitation learning framework centered on state-only distribution matching: it minimizes a Wasserstein distance computed in a dynamics-aware latent space. Unlike prior work, this latent space is obtained via a pre-training stage in which we train an Intention Conditioned Value Function (ICVF) on a small set of randomly generated state-only data to capture a dynamics-aware structure of the state space. We show that this enhances the policy's understanding of state transitions, enabling the learning process to achieve expert-level performance from only one or a few state-only expert episodes. Through experiments on multiple MuJoCo environments, we demonstrate that our method outperforms prior Wasserstein-based IL methods and prior adversarial IL methods, achieving better results across various tasks.
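The abstract's core step, aligning expert and policy state distributions via a Wasserstein distance over latent embeddings, can be illustrated with a minimal sketch. Note the paper estimates this distance adversarially with a trained critic; here, as an illustrative stand-in, we use a sliced Wasserstein-1 estimate over random 1-D projections, and randomly generated arrays stand in for the ICVF-derived latent embeddings of expert and policy states (all names below are hypothetical, not from the paper's code).

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=50, rng=None):
    """Approximate the Wasserstein-1 distance between two equal-sized
    sets of latent embeddings x, y (shape (n, d)) by averaging exact
    1-D Wasserstein distances over random projection directions."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)       # random unit direction
        px = np.sort(x @ theta)              # projected, sorted samples
        py = np.sort(y @ theta)
        total += np.abs(px - py).mean()      # exact 1-D W1 via sorting
    return total / n_proj

# Placeholder embeddings standing in for phi(expert states), phi(policy states),
# where phi would be the ICVF-derived encoder in the paper's setting.
rng = np.random.default_rng(0)
expert_z = rng.normal(size=(256, 8))
policy_z = rng.normal(loc=0.5, size=(256, 8))  # policy states, shifted
print(sliced_wasserstein(expert_z, policy_z, rng=1))
```

In the adversarial setting of the paper, this sample-based estimate is replaced by a critic trained to realize the Kantorovich dual of the Wasserstein distance, whose output then serves as the imitation reward.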
Problem

Research questions and friction points this paper is trying to address.

Imitation Learning
State-only demonstrations
Expert demonstrations
Wasserstein distance
Adversarial learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Wasserstein
Adversarial Imitation Learning
State-only Demonstration
Dynamics-aware Latent Space
Intention Conditioned Value Function
Siqi Yang
University of Electronic Science and Technology of China
Generative Speech Enhancement, Automatic Speech Recognition, Diffusion Models
Kai Yan
University of Illinois Urbana-Champaign
Alexander G. Schwing
University of Illinois Urbana-Champaign
Yu-Xiong Wang
University of Illinois Urbana-Champaign