🤖 AI Summary
This work proposes a semi-supervised imitation learning approach based on an inverse dynamics model (IDM) for settings with few action-labeled trajectories and abundant action-free trajectories. The IDM predicts actions from state transitions and can either act as a policy when paired with a video model (VM-IDM) or generate pseudo-labels for unlabeled data (IDM labeling); the authors show these two uses learn the same policy in a limit case. Their analysis attributes the advantage of IDM-based policies over behavior cloning to the superior sample efficiency of IDM learning, which stems from two properties of the ground-truth IDM: it lies in a lower-complexity hypothesis class than the expert policy and is typically less stochastic. Building on these insights, the authors study IDM-based policies using recent architectures for unified video-action prediction (UVA) and propose an improved version of the LAPO algorithm for latent action policy learning. Both theory and experiments consistently support the superior sample efficiency of the IDM-based approach.
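To make the IDM-labeling idea concrete, here is a minimal, self-contained sketch (not the paper's implementation) on a toy linear control problem: fit an IDM on a small action-labeled set, pseudo-label a large action-free set, then behavior-clone a policy on the pseudo-labels. The environment, matrices, and least-squares models are illustrative assumptions; the toy setup also mirrors the paper's complexity argument, since the ground-truth IDM here (`a = s' - s`) is simpler than the expert policy (`a = W s`).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.normal(size=(d, d))          # hypothetical expert policy: a = W s

def rollout(n):
    """Sample expert transitions under simple additive dynamics s' = s + a."""
    s = rng.normal(size=(n, d))
    a = s @ W.T
    return s, a, s + a

# Small action-labeled set, large action-free set
s_l, a_l, sn_l = rollout(20)
s_u, _, sn_u = rollout(2000)

# 1) Fit the IDM by least squares: predict a from the transition (s, s')
X_l = np.hstack([s_l, sn_l])
idm, *_ = np.linalg.lstsq(X_l, a_l, rcond=None)

# 2) IDM labeling: pseudo-label the action-free transitions
a_pseudo = np.hstack([s_u, sn_u]) @ idm

# 3) Behavior cloning on the pseudo-labeled data: policy s -> a
pi, *_ = np.linalg.lstsq(s_u, a_pseudo, rcond=None)

# pi recovers the expert policy from only 20 labeled transitions
print(np.abs(pi.T - W).max())
```

In this deterministic toy setting the pseudo-labels are exact, so behavior cloning on them recovers the expert policy; the paper's contribution is to explain when and why this kind of IDM-based pipeline beats direct behavior cloning on the small labeled set alone.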
📝 Abstract
Semi-supervised imitation learning (SSIL) consists in learning a policy from a small dataset of action-labeled trajectories and a much larger dataset of action-free trajectories. Some SSIL methods learn an inverse dynamics model (IDM) to predict the action from the current state and the next state. An IDM can act as a policy when paired with a video model (VM-IDM) or as a label generator to perform behavior cloning on action-free data (IDM labeling). In this work, we first show that VM-IDM and IDM labeling learn the same policy in a limit case, which we call the IDM-based policy. We then argue that the previously observed advantage of IDM-based policies over behavior cloning is due to the superior sample efficiency of IDM learning, which we attribute to two causes: (i) the ground-truth IDM tends to be contained in a lower complexity hypothesis class relative to the expert policy, and (ii) the ground-truth IDM is often less stochastic than the expert policy. We argue these claims based on insights from statistical learning theory and novel experiments, including a study of IDM-based policies using recent architectures for unified video-action prediction (UVA). Motivated by these insights, we finally propose an improved version of the existing LAPO algorithm for latent action policy learning.