Learning Skills from Action-Free Videos

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of learning robotic skills, and of connecting high-level planning to low-level execution, from videos that lack action annotations. The proposed Skill Abstraction from Optical Flow (SOF) framework is presented as the first method to jointly learn composable, plannable, and executable visual skill representations without action supervision, by modeling a motion-aligned latent space guided by optical flow. To improve generalization, SOF integrates a video-generation prior with a multi-task reinforcement learning transfer mechanism, enabling end-to-end skill acquisition and policy transfer across tasks. Evaluated on multi-task and long-horizon robotic control benchmarks, SOF achieves significant gains over prior methods. These results demonstrate the effectiveness and scalability of purely vision-driven skill abstraction, composition, and execution, establishing a foundation for unsupervised, hierarchical robot learning from unstructured video.

📝 Abstract
Learning from videos offers a promising path toward generalist robots by providing rich visual and temporal priors beyond what real robot datasets contain. While existing video generative models produce impressive visual predictions, they are difficult to translate into low-level actions. Conversely, latent-action models better align videos with actions, but they typically operate at the single-step level and lack high-level planning capabilities. We bridge this gap by introducing Skill Abstraction from Optical Flow (SOF), a framework that learns latent skills from large collections of action-free videos. Our key idea is to learn a latent skill space through an intermediate representation based on optical flow that captures motion information aligned with both video dynamics and robot actions. By learning skills in this flow-based latent space, SOF enables high-level planning over video-derived skills and allows for easier translation of these skills into actions. Experiments show that our approach consistently improves performance in both multitask and long-horizon settings, demonstrating the ability to acquire and compose skills directly from raw visual data.
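To make the two-stage idea in the abstract concrete, the sketch below shows how a flow-conditioned skill encoder and a skill-conditioned action decoder could fit together. This is a minimal sketch, assuming plain PyTorch MLPs: the class names, layer sizes, and shapes are illustrative placeholders rather than the components reported in the paper; only the overall structure (skills learned from optical flow on action-free video, then decoded into robot actions) follows the abstract.

```python
import torch
import torch.nn as nn


class FlowSkillEncoder(nn.Module):
    """Encodes a short clip of optical-flow fields into a latent skill vector.
    Hypothetical architecture: a flat MLP stands in for whatever encoder the
    paper actually uses."""

    def __init__(self, flow_dim: int, skill_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(flow_dim, 256), nn.ReLU(),
            nn.Linear(256, skill_dim),
        )

    def forward(self, flow_clip: torch.Tensor) -> torch.Tensor:
        # flow_clip: (batch, T, H, W, 2); flatten each clip into one vector.
        return self.net(flow_clip.flatten(start_dim=1))


class SkillConditionedPolicy(nn.Module):
    """Decodes a low-level action from the current observation plus a skill
    latent; only this head would need action-labelled robot data."""

    def __init__(self, obs_dim: int, skill_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + skill_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs: torch.Tensor, skill: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, skill], dim=-1))


# Toy shapes: 8-frame clips of 64x64 flow fields, 10-D proprioception, 7-D actions.
T, H, W = 8, 64, 64
encoder = FlowSkillEncoder(flow_dim=T * H * W * 2)
policy = SkillConditionedPolicy(obs_dim=10, skill_dim=32, action_dim=7)

skill = encoder(torch.randn(1, T, H, W, 2))   # skill learned from action-free video
action = policy(torch.randn(1, 10), skill)    # skill grounded into a robot action
print(skill.shape, action.shape)
```

Under this reading, a high-level planner would operate by selecting or composing skill latents, while only the small skill-to-action head requires action-labelled data.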
Problem

Research questions and friction points this paper is trying to address.

Bridging video generative models and low-level action execution
Learning latent skills from action-free video data
Enabling high-level planning for robot skill composition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning latent skills from action-free videos
Using optical flow as an intermediate motion representation (see the sketch after this list)
Enabling high-level planning and action translation
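As a concrete illustration of the intermediate representation, the snippet below computes a dense optical-flow field between two frames with OpenCV's Farneback method. This is only a generic example of what such a motion signal looks like (a per-pixel displacement field obtained without any action labels); the paper does not state that this particular flow estimator is the one it uses.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (random stand-ins for real video frames).
prev_gray = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
next_gray = np.random.randint(0, 255, (240, 320), dtype=np.uint8)

# Dense optical flow: one (dx, dy) displacement per pixel, shape (H, W, 2).
flow = cv2.calcOpticalFlowFarneback(
    prev_gray, next_gray, None,
    0.5,  # pyr_scale
    3,    # levels
    15,   # winsize
    3,    # iterations
    5,    # poly_n
    1.2,  # poly_sigma
    0,    # flags
)

# Magnitude and angle summarize where and how strongly the scene moved,
# with no action annotations required.
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print(flow.shape, float(magnitude.mean()))
```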
Authors
Hung-Chieh Fang
National Taiwan University
Kuo-Han Hung
Stanford University
robotics, machine learning, natural language processing
Chu-Rong Chen
National Taiwan University
Po-Jung Chou
National Taiwan University
Chun-Kai Yang
National Taiwan University
Po-Chen Ko
National Taiwan University
Yu-Chiang Wang
National Taiwan University, NVIDIA
Yueh-Hua Wu
NVIDIA
Reinforcement Learning, Machine Learning
Min-Hung Chen
Senior Research Scientist @ NVIDIA
Multimodal Learning, Video Understanding, Transfer Learning, Computer Vision, Deep Learning
Shao-Hua Sun
Assistant Professor at National Taiwan University
Machine Learning, Robot Learning, Reinforcement Learning, Program Synthesis