🤖 AI Summary
To address the vulnerability of video representations to background interference under scarce action-label supervision, this paper proposes LAOF, a pseudo-supervised framework that introduces optical flow as an explicit constraint signal for latent action representation learning. Because optical flow captures pixel-level motion between consecutive frames, it naturally suppresses static background distractors and emphasizes the moving agent; LAOF exploits this by using the agent's optical flow as an action-driven signal for learning latent action representations that are robust to distractors. Experiments show that: (i) the optical flow constraints substantially stabilize training and improve representation quality under extremely label-scarce conditions, and remain effective as the proportion of action labels grows to 10%; (ii) even without any action supervision, LAOF matches or surpasses action-supervised methods trained with 1% of action labels; and (iii) the learned latent representations outperform existing methods on downstream imitation learning and reinforcement learning tasks. This work points toward a practical recipe for robust video pretraining in embodied intelligence.
📝 Abstract
Learning latent actions from large-scale videos is crucial for the pre-training of scalable embodied foundation models, yet existing methods often struggle with action-irrelevant distractors. Although incorporating action supervision can alleviate these distractions, its effectiveness is restricted by the scarcity of available action labels. Optical flow represents pixel-level motion between consecutive frames, naturally suppressing background elements and emphasizing moving objects. Motivated by this, we propose robust Latent Action learning with Optical Flow constraints, called LAOF, a pseudo-supervised framework that leverages the agent's optical flow as an action-driven signal to learn latent action representations robust to distractors. Experimental results show that the latent representations learned by LAOF outperform existing methods on downstream imitation learning and reinforcement learning tasks. This superior performance arises from optical flow constraints, which substantially stabilize training and improve the quality of latent representations under extremely label-scarce conditions, while remaining effective as the proportion of action labels increases to 10 percent. Importantly, even without action supervision, LAOF matches or surpasses action-supervised methods trained with 1 percent of action labels.
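The abstract does not spell out the form of the optical flow constraint, so the following is only a minimal NumPy sketch of one plausible formulation: a flow prediction decoded from the latent action is penalized against the observed optical flow, restricted to a (hypothetical) agent mask so that background motion contributes nothing to the loss. The function name, the masking scheme, and the per-pixel squared-error form are all assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def flow_consistency_loss(pred_flow: np.ndarray,
                          target_flow: np.ndarray,
                          agent_mask: np.ndarray) -> float:
    """Hypothetical pseudo-supervised flow constraint.

    pred_flow:   (H, W, 2) flow decoded from the latent action.
    target_flow: (H, W, 2) optical flow estimated between frames.
    agent_mask:  (H, W) binary mask selecting agent pixels, so that
                 static-background flow never enters the loss.
    Returns the mean squared flow error over masked pixels.
    """
    sq_err = (pred_flow - target_flow) ** 2          # per-pixel, per-channel error
    masked = sq_err * agent_mask[..., None]          # zero out background pixels
    denom = agent_mask.sum() * pred_flow.shape[-1]   # number of masked flow components
    return float(masked.sum() / (denom + 1e-8))

# Toy usage: a perfect prediction inside the mask yields zero loss,
# while any flow mismatch on agent pixels is penalized.
target = np.zeros((4, 4, 2))
target[1, 1] = [1.0, 0.5]                            # agent moves at one pixel
mask = np.zeros((4, 4)); mask[1, 1] = 1.0
print(flow_consistency_loss(target, target, mask))   # exact match -> 0.0
```

In a full training loop this term would be added to the self-supervised latent-action objective, with the flow target coming from an off-the-shelf flow estimator rather than ground truth.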