🤖 AI Summary
This work addresses a practical bottleneck in imitation learning, namely scarce access to expert action labels, by systematically surveying learning from observation (LfO), which relies solely on expert state sequences. It proposes a unified framework for LfO and uses it to classify existing methods by their trajectory construction, underlying assumptions, and algorithmic design choices. The survey also draws connections between LfO and related fields, including offline RL, model-based RL, and hierarchical RL, and characterizes the key assumptions and applicability conditions of each class of methods. Finally, it identifies major open challenges, such as ambiguity in inverse dynamics and distributional shift, and suggests directions for future research, providing a structured conceptual foundation for more robust, interpretable, and practically applicable LfO methods.
📝 Abstract
Imitation Learning (IL) algorithms offer an efficient way to train an agent by mimicking an expert's behavior without requiring a reward function. However, IL algorithms often require access to both state and action information from expert demonstrations. Although expert actions provide detailed guidance, requiring such action information may be impractical for real-world applications where expert actions are difficult to obtain. To address this limitation, the concept of learning from observation (LfO), or state-only imitation learning (SOIL), has recently gained attention, in which the imitator has access only to expert state visitation information. In this paper, we present a framework for LfO and use it to survey and classify existing LfO methods in terms of their trajectory construction, underlying assumptions, and algorithmic design choices. This survey also draws connections to several related fields, such as offline RL, model-based RL, and hierarchical RL. Finally, we use our framework to identify open problems and suggest directions for future research.
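To make the state-only setting concrete, here is a minimal sketch of one common LfO recipe: fit an inverse dynamics model from the agent's own interaction data, use it to infer the expert's missing actions from consecutive expert states, and then imitate the inferred (state, action) pairs. The toy 1-D chain environment, the tabular models, and all names below are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

# Toy 1-D chain: states 0..4, actions {0: left, 1: right}.
def step(s, a):
    return max(0, min(4, s + (1 if a == 1 else -1)))

rng = np.random.default_rng(0)

# 1) Collect interaction data with a random policy and fit a tabular
#    inverse dynamics model: action counts indexed by (s, s').
counts = {}
for _ in range(500):
    s = int(rng.integers(0, 5))
    a = int(rng.integers(0, 2))
    s2 = step(s, a)
    counts.setdefault((s, s2), [0, 0])[a] += 1

def inverse_dynamics(s, s2):
    # Most likely action for the observed state transition.
    c = counts.get((s, s2), [1, 1])
    return int(np.argmax(c))

# 2) The expert provides states only (no actions): it walks 0 -> 4.
expert_states = [0, 1, 2, 3, 4]

# 3) Infer actions for consecutive expert states, then imitate the
#    inferred pairs (here a simple lookup policy stands in for
#    behavioral cloning on the labeled data).
policy = {s: inverse_dynamics(s, s2)
          for s, s2 in zip(expert_states[:-1], expert_states[1:])}

print(policy)  # inferred "move right" action at every non-terminal state
```

In a real LfO method the tabular counts would be replaced by learned function approximators, and the lookup policy by a cloned neural policy; the sketch only shows how state-only demonstrations can be converted into actionable supervision.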