S-VAM: Shortcut Video-Action Model by Self-Distilling Geometric and Semantic Foresight

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-action models struggle to achieve real-time inference and high-fidelity visual prediction simultaneously. To address this, the work proposes a self-distillation-based single-step video-action model that compresses the generative priors of multi-step denoising into a single forward pass. By combining a diffusion model, vision-foundation-model representations, and lightweight decouplers, the method predicts coherent geometric and semantic representations in one step, which simplifies and accelerates action prediction. Evaluated in both simulated and real-world environments, the approach significantly outperforms current state-of-the-art methods, achieving higher accuracy and real-time performance on complex robotic manipulation tasks.
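The single-forward-pass pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all dimensions, weights, and function names (`one_step_features`, `decouple`, `predict_action`) are hypothetical stand-ins for the trained diffusion backbone, decouplers, and action head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify these.
OBS_DIM, FEAT_DIM, GEO_DIM, SEM_DIM, ACT_DIM = 32, 64, 16, 16, 7

# Random matrices stand in for trained network weights.
W_feat = rng.standard_normal((OBS_DIM, FEAT_DIM)) * 0.1
W_geo = rng.standard_normal((FEAT_DIM, GEO_DIM)) * 0.1
W_sem = rng.standard_normal((FEAT_DIM, SEM_DIM)) * 0.1
W_act = rng.standard_normal((GEO_DIM + SEM_DIM, ACT_DIM)) * 0.1

def one_step_features(obs):
    """Single denoising forward pass: noisy intermediate video features."""
    return np.tanh(obs @ W_feat)

def decouple(feat):
    """Lightweight decouplers: map noisy features to geometric and
    semantic foresight representations."""
    return np.tanh(feat @ W_geo), np.tanh(feat @ W_sem)

def predict_action(obs):
    """Shortcut pipeline: observation -> action in one forward pass,
    conditioned on the foreseen geometric/semantic blueprint."""
    geo, sem = decouple(one_step_features(obs))
    return np.concatenate([geo, sem], axis=-1) @ W_act

obs = rng.standard_normal((1, OBS_DIM))
action = predict_action(obs)
print(action.shape)  # (1, 7)
```

The key point the sketch captures is that no iterative sampling loop appears anywhere on the inference path: foresight and action prediction are both a single matrix-pipeline pass.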

📝 Abstract
Video action models (VAMs) have emerged as a promising paradigm for robot learning, owing to their powerful visual foresight for complex manipulation tasks. However, current VAMs, typically relying on either slow multi-step video generation or noisy one-step feature extraction, cannot simultaneously guarantee real-time inference and high-fidelity foresight. To address this limitation, we propose S-VAM, a shortcut video-action model that foresees coherent geometric and semantic representations via a single forward pass. Serving as a stable blueprint, these foreseen representations significantly simplify the action prediction. To enable this efficient shortcut, we introduce a novel self-distillation strategy that condenses structured generative priors of multi-step denoising into one-step inference. Specifically, vision foundation model (VFM) representations extracted from the diffusion model's own multi-step generated videos provide teacher targets. Lightweight decouplers, as students, learn to directly map noisy one-step features to these targets. Extensive experiments in simulation and the real world demonstrate that our S-VAM outperforms state-of-the-art methods, enabling efficient and precise manipulation in complex environments. Our project page is https://haodong-yan.github.io/S-VAM/
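The self-distillation strategy in the abstract (VFM features of the model's own multi-step generations as teacher targets, a lightweight decoupler mapping noisy one-step features to those targets) can be sketched with toy numerics. This is a schematic under assumed shapes and random stand-in weights, not the paper's training code; `denoise`, `vfm_encode`, and `W_dec` are illustrative names.

```python
import numpy as np

rng = np.random.default_rng(1)
FEAT_DIM, VFM_DIM, STEPS = 64, 32, 8  # hypothetical sizes

# Stand-ins for the diffusion step, frozen VFM encoder, and student decoupler.
W_step = rng.standard_normal((FEAT_DIM, FEAT_DIM)) * 0.05
W_vfm = rng.standard_normal((FEAT_DIM, VFM_DIM)) * 0.1
W_dec = rng.standard_normal((FEAT_DIM, VFM_DIM)) * 0.1

def denoise(x, n_steps):
    """Iterative denoising: n_steps refinement passes over the latent."""
    for _ in range(n_steps):
        x = np.tanh(x @ W_step)
    return x

def vfm_encode(video_latent):
    """Frozen VFM representation of a generated video (teacher target)."""
    return video_latent @ W_vfm

noisy = rng.standard_normal((4, FEAT_DIM))

# Teacher: VFM features of the model's own multi-step generation.
teacher = vfm_encode(denoise(noisy, STEPS))

# Student: decoupler maps noisy one-step features directly to the target.
student = denoise(noisy, 1) @ W_dec

# Self-distillation objective: match student to teacher (here, MSE).
distill_loss = np.mean((student - teacher) ** 2)
print(student.shape, teacher.shape)  # (4, 32) (4, 32)
```

At convergence the decoupler's one-step output approximates the multi-step teacher, which is what lets inference drop the denoising loop entirely.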
Problem

Research questions and friction points this paper is trying to address.

Video Action Models
Real-time Inference
High-fidelity Foresight
Robot Learning
Visual Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-distillation
video-action model
one-step inference
visual foresight
vision foundation model