🤖 AI Summary
This work addresses the scarcity of action-labeled expert demonstrations in real-world imitation learning by proposing an adversarial imitation learning approach that requires only state-only expert trajectories. The method leverages a pretrained intent-conditioned value function to construct a dynamics-aware latent space, in which the state distributions of the expert and the learned policy are aligned using the Wasserstein distance. By operating in this structured latent space, the approach substantially reduces reliance on both the quantity of expert trajectories and the availability of action annotations. Empirical results across multiple MuJoCo environments demonstrate that the method achieves expert-level performance from only one to a few action-free expert trajectories, outperforming existing Wasserstein-based and adversarial imitation learning techniques.
📝 Abstract
Imitation Learning (IL) enables agents to mimic expert behavior by learning from demonstrations. However, traditional IL methods require large amounts of medium-to-high-quality demonstrations, as well as the actions taken in those demonstrations, both of which are often unavailable. To reduce this need, we propose Latent Wasserstein Adversarial Imitation Learning (LWAIL), a novel adversarial imitation learning framework that performs state-only distribution matching using the Wasserstein distance computed in a dynamics-aware latent space. Unlike prior work, this latent space is obtained via a pre-training stage in which an Intention Conditioned Value Function (ICVF) is trained on a small set of randomly generated state-only data to capture the dynamics-aware structure of the state space. We show that this enhances the policy's understanding of state transitions, allowing expert-level performance to be reached from only one or a few state-only expert episodes. Experiments on multiple MuJoCo environments demonstrate that our method outperforms prior Wasserstein-based and adversarial IL methods across various tasks.
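The pipeline the abstract describes — encode states with a pretrained, dynamics-aware encoder, then match expert and policy state distributions under a Wasserstein distance in that latent space — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the encoder here is a stand-in random linear map rather than a trained ICVF, and `sliced_w1` substitutes a simple sliced-Wasserstein estimate for the adversarially trained critic.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, LATENT_DIM = 6, 3
PHI = rng.normal(size=(STATE_DIM, LATENT_DIM))  # stand-in for an ICVF-derived encoder


def encode(states):
    """Map raw states into the latent space (placeholder for ICVF features)."""
    return states @ PHI


def sliced_w1(x, y, n_proj=64):
    """Monte-Carlo sliced Wasserstein-1 between two equal-size latent point clouds."""
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for d in dirs:
        # 1-D W1 between empirical distributions = mean gap of sorted projections
        px, py = np.sort(x @ d), np.sort(y @ d)
        total += np.abs(px - py).mean()
    return total / n_proj


# Toy "expert" and "policy" state samples with different distributions
expert_states = rng.normal(loc=1.0, size=(256, STATE_DIM))
policy_states = rng.normal(loc=0.0, size=(256, STATE_DIM))

d_mismatch = sliced_w1(encode(expert_states), encode(policy_states))
d_match = sliced_w1(encode(expert_states), encode(expert_states))

# The latent Wasserstein distance shrinks as the policy's state distribution
# approaches the expert's; its negative can serve as an imitation reward.
print(d_mismatch > d_match)
```

In the actual method this distance is estimated adversarially and minimized by the policy, so that matching state occupancies in the latent space requires no expert action labels.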