From Observation to Action: Latent Action-based Primitive Segmentation for VLA Pre-training in Industrial Settings

📅 2025-11-26
🤖 AI Summary
This work addresses the unsupervised extraction of semantically consistent action primitives from continuous industrial video streams to support Vision-Language-Action (VLA) model pre-training. Methodologically, the authors propose an end-to-end automated framework that trains a lightweight motion tokenizer to encode dynamic behaviors, introduces a novel unsupervised metric—"latent action energy"—for detecting action-segment boundaries, and leverages vision-language models for semantic clustering and consistency evaluation of the discovered primitives. Key contributions include: (1) the first fully automated pipeline converting unstructured industrial videos into VLA pre-training data; (2) "latent action energy," an interpretable and scalable measure of action saliency; and (3) empirical validation on public benchmarks and a newly curated motor-assembly dataset, demonstrating effective segmentation and high semantic consistency of the generated action primitives—establishing a scalable data foundation for embodied AI in manufacturing.

📝 Abstract
We present a novel unsupervised framework to unlock vast unlabeled human demonstration data from continuous industrial video streams for Vision-Language-Action (VLA) model pre-training. Our method first trains a lightweight motion tokenizer to encode motion dynamics, then employs an unsupervised action segmenter leveraging a novel "Latent Action Energy" metric to discover and segment semantically coherent action primitives. The pipeline outputs both segmented video clips and their corresponding latent action sequences, providing structured data directly suitable for VLA pre-training. Evaluations on public benchmarks and a proprietary electric motor assembly dataset demonstrate effective segmentation of key tasks performed by humans at workstations. Further clustering and quantitative assessment via a Vision-Language Model confirm the semantic coherence of the discovered action primitives. To our knowledge, this is the first fully automated end-to-end system for extracting and organizing VLA pre-training data from unstructured industrial videos, offering a scalable solution for embodied AI integration in manufacturing.
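The abstract does not publish the formula for "Latent Action Energy," but its role—scoring motion saliency per frame so that low-energy valleys mark primitive boundaries—can be illustrated. Below is a minimal sketch assuming the energy is the L2 norm of the temporal difference between consecutive latent motion tokens; the function names, the threshold rule, and that exact definition are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def latent_action_energy(latents):
    """Hypothetical 'latent action energy': L2 norm of the temporal
    difference between consecutive latent motion tokens (shape (T, D)).
    High energy suggests salient motion; low-energy valleys suggest
    pauses between action primitives. Assumed definition, for
    illustration only."""
    diffs = np.diff(latents, axis=0)       # (T-1, D) frame-to-frame change
    return np.linalg.norm(diffs, axis=1)   # (T-1,) scalar energy per step

def segment_boundaries(energy, threshold):
    """Propose a boundary at the first frame of each run where energy
    dips below the threshold, i.e. where motion momentarily stalls."""
    below = energy < threshold
    # start of a low-energy run: below now, but not at the previous step
    starts = np.flatnonzero(below & ~np.r_[False, below[:-1]])
    return starts.tolist()
```

For example, a latent trajectory that moves steadily, pauses, then moves again yields one low-energy valley, and `segment_boundaries` reports the frame where the pause begins as a candidate primitive boundary.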
Problem

Research questions and friction points this paper is trying to address.

Segmenting continuous industrial videos into coherent action primitives
Automating VLA pre-training data extraction from unlabeled human demonstrations
Enabling scalable embodied AI integration in manufacturing settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised motion tokenizer encodes video dynamics
Latent Action Energy metric segments action primitives
Automated pipeline extracts structured VLA training data
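The final pipeline stage groups segmented clips into semantically coherent primitives, which the paper does via a Vision-Language Model. As a stand-in for that step, the sketch below clusters generic segment embeddings by greedy cosine similarity; the function, threshold, and clustering rule are illustrative assumptions rather than the authors' VLM-based procedure.

```python
import numpy as np

def cluster_primitives(embeddings, sim_threshold=0.9):
    """Greedy cosine-similarity clustering of action-primitive
    embeddings (stand-in for VLM-based semantic clustering).
    Each embedding joins the first existing cluster whose seed it
    matches above sim_threshold, otherwise it starts a new cluster.
    Returns one integer cluster label per segment."""
    seeds, labels = [], []
    for e in embeddings:
        e = e / np.linalg.norm(e)                 # unit-normalize
        sims = [float(s @ e) for s in seeds]      # cosine vs. each seed
        if sims and max(sims) >= sim_threshold:
            labels.append(int(np.argmax(sims)))
        else:
            seeds.append(e)                       # open a new cluster
            labels.append(len(seeds) - 1)
    return labels
```

With embeddings from any encoder, two near-duplicate "pick up screwdriver" clips would land in one cluster while a "fasten bolt" clip opens another, mirroring the semantic grouping the pipeline performs before VLA pre-training.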