Towards Generalisable Imitation Learning Through Conditioned Transition Estimation and Online Behaviour Alignment

📅 2026-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing imitation learning methods typically rely on action labels, assume a unique optimal action for each state, and overlook discrepancies in environmental dynamics, making it challenging to learn effective policies from observed trajectories in a fully unsupervised manner. This work proposes UfO, the first entirely unsupervised imitation learning framework, which operates via a two-stage process: first, it infers the teacher’s implicit actions from state transitions through conditional transition estimation; second, it dynamically aligns the agent’s trajectory with the teacher’s behavior using an online behavioral alignment mechanism. By eliminating reliance on action supervision and the single-action assumption, UfO consistently outperforms both the teacher policy and other ILfO approaches across five benchmark environments, achieving the lowest standard deviation and demonstrating superior generalization and stability.
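The two-stage process described above can be illustrated with a deliberately tiny sketch. Everything below is an illustrative stand-in, not the paper's actual models: 1-D states, known additive dynamics, and a linear policy. Stage 1 recovers the teacher's implicit actions from state transitions (here a trivial inverse-dynamics estimate); stage 2 refines the policy online by shrinking the gap between the agent's and the teacher's next states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative assumption, not from the paper): 1-D states,
# deterministic dynamics s' = s + a, teacher policy a = -0.5 * s.
states = rng.uniform(-1, 1, size=200)
next_states = states + (-0.5 * states)  # observed teacher transitions, no action labels

# Stage 1 -- stand-in for conditioned transition estimation: infer the
# teacher's implicit action for each transition. With additive dynamics this
# reduces to a ≈ s' - s; noise mimics imperfect estimation.
inferred_actions = next_states - states + rng.normal(0.0, 0.1, size=states.shape)

# Behaviour-clone a linear policy a = w * s on the inferred pseudo-actions
# via least squares; w lands near the teacher's -0.5 but with label noise.
w = np.sum(states * inferred_actions) / np.sum(states * states)

# Stage 2 -- stand-in for online behaviour alignment: roll the learned
# policy out and nudge w to close the gap to the teacher's next states.
lr = 0.1
for _ in range(50):
    s = rng.uniform(-1, 1, size=64)
    agent_next = s + w * s
    teacher_next = s + (-0.5 * s)
    grad = np.mean(2.0 * (agent_next - teacher_next) * s)  # d/dw of squared gap
    w -= lr * grad

print(w)  # converges towards the teacher's coefficient of -0.5
```

In this sketch the alignment stage corrects the residual error left by noisy action inference, mirroring (very loosely) how UfO's second stage refines the policy obtained from estimated actions.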

📝 Abstract
State-of-the-art imitation learning from observation methods (ILfO) have recently made significant progress, but they still have some limitations: they need action-based supervised optimisation, assume that states have a single optimal action, and tend to apply teacher actions without full consideration of the actual environment state. While the truth may be out there in observed trajectories, existing methods struggle to extract it without supervision. In this work, we propose Unsupervised Imitation Learning from Observation (UfO) that addresses all of these limitations. UfO learns a policy through a two-stage process, in which the agent first obtains an approximation of the teacher's true actions in the observed state transitions, and then refines the learned policy further by adjusting agent trajectories to closely align them with the teacher's. Experiments we conducted in five widely used environments show that UfO not only outperforms the teacher and all other ILfO methods but also displays the smallest standard deviation. This reduction in standard deviation indicates better generalisation in unseen scenarios.
Problem

Research questions and friction points this paper is trying to address.

Imitation Learning from Observation
Unsupervised Learning
Generalisation
Policy Learning
State-Action Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised Imitation Learning
Conditioned Transition Estimation
Online Behaviour Alignment
Imitation Learning from Observation
Generalisation