🤖 AI Summary
Latent action learning degrades significantly on realistic videos that contain irrelevant dynamic distractors, severely compromising the quality of the latent action representations learned by unsupervised methods such as Latent Action Policies (LAPO).
Method: We propose LAOM, a simple modification of the LAPO architecture that integrates lightweight supervision (ground-truth actions for only 2.5% of the dataset) into latent action learning. The study uses linear probing to evaluate representation quality, models distractors with the Distracting Control Suite, and jointly optimizes the supervised and unsupervised objectives.
Contribution/Results: LAOM improves latent action representation quality by 8× (measured via linear-probe accuracy) and boosts average downstream task performance by 4.2×. It challenges the conventional "unsupervised pretraining, then action decoding" pipeline, empirically demonstrating that a small amount of action labels is critical for learning distractor-robust representations.
📝 Abstract
Recently, latent action learning, pioneered by Latent Action Policies (LAPO), has shown remarkable pre-training efficiency on observation-only data, offering the potential to leverage the vast amounts of video available on the web for embodied AI. However, prior work has focused on distractor-free data, where changes between observations are primarily explained by ground-truth actions. Unfortunately, real-world videos contain action-correlated distractors that may hinder latent action learning. Using the Distracting Control Suite (DCS), we empirically investigate the effect of distractors on latent action learning and demonstrate that LAPO struggles in such scenarios. We propose LAOM, a simple LAPO modification that improves the quality of latent actions by 8x, as measured by linear probing. Importantly, we show that providing supervision with ground-truth actions, for as few as 2.5% of the full dataset, during latent action learning improves downstream performance by 4.2x on average. Our findings suggest that integrating supervision during Latent Action Model (LAM) training is critical in the presence of distractors, challenging the conventional pipeline of first learning a LAM and only then decoding from latent to ground-truth actions.
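Both the summary and abstract measure latent action quality via linear probing: fitting a linear map from frozen latent actions to ground-truth actions and scoring how much action information is linearly decodable. Below is a minimal NumPy sketch of such a probe, not the paper's exact protocol; the array names and the 80/20 split are illustrative assumptions, and R² stands in for whatever probe metric the paper reports.

```python
import numpy as np

def linear_probe_score(latents: np.ndarray, actions: np.ndarray) -> float:
    """Fit a linear probe (least squares, with bias) from frozen latent
    actions to ground-truth actions on a train split, then report R^2 on
    a held-out split. Higher score = more action information is linearly
    decodable from the latents. Illustrative sketch, not the paper's code."""
    n = len(latents)
    split = int(0.8 * n)  # assumed 80/20 train/test split
    X_tr, X_te = latents[:split], latents[split:]
    Y_tr, Y_te = actions[:split], actions[split:]
    # Append a bias column so the probe is affine, not purely linear.
    X_tr_b = np.hstack([X_tr, np.ones((len(X_tr), 1))])
    X_te_b = np.hstack([X_te, np.ones((len(X_te), 1))])
    W, *_ = np.linalg.lstsq(X_tr_b, Y_tr, rcond=None)
    pred = X_te_b @ W
    ss_res = ((Y_te - pred) ** 2).sum()
    ss_tot = ((Y_te - Y_te.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot
```

On synthetic data where actions are a linear function of the latents the probe scores near 1.0, while latents carrying no action information score near or below 0, which is the sense in which an "8x" probe improvement reflects better representations.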