🤖 AI Summary
This work proposes World2Act, a novel framework that addresses the limitations of existing world model-based visuomotor policies, which rely on pixel-level supervision and are thus susceptible to generation artifacts and variations in task duration. World2Act is the first to align actions with the dynamic latent representations of a world model rather than in pixel space, and it integrates a large language model (LLM) to automatically decompose high-level skill instructions into executable sub-skills. This enables the construction of a composable skill-based world model capable of handling tasks of arbitrary length. By combining contrastive learning with LLM-driven skill decomposition, the method achieves state-of-the-art performance on the RoboCasa-Skill and LIBERO-Skill benchmarks, improving real-world robotic task success rates by 6.7% and significantly enhancing cross-task generalization for embodied agents.
📄 Abstract
World Models (WMs) have emerged as a promising approach for post-training Vision-Language-Action (VLA) policies to improve robustness and generalization under environmental changes. However, most WM-based post-training methods rely on pixel-space supervision, making policies sensitive to pixel-level artifacts and hallucinations from imperfect WM rollouts. We introduce World2Act, a post-training framework that aligns VLA actions directly with WM video-dynamics latents using a contrastive matching objective, reducing dependence on pixels. Post-training performance is tied to rollout quality, yet current WMs struggle with arbitrary-length video generation because they are mostly trained on fixed-length clips while robotic execution durations vary widely. To address this, we propose an automatic LLM-based skill-decomposition pipeline that segments high-level instructions into low-level prompts. Our pipeline produces RoboCasa-Skill and LIBERO-Skill, supporting skill-compositional WMs that remain temporally consistent across diverse task horizons. Empirically, applying World2Act to VLAs such as GR00T-N1.6 and Cosmos Policy achieves state-of-the-art results on RoboCasa and LIBERO, and improves real-world performance by 6.7%, enhancing embodied agent generalization.
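To make the contrastive matching idea concrete, the sketch below shows a symmetric InfoNCE-style objective between policy action embeddings and world-model dynamics latents. This is an illustrative assumption about the loss family, not the paper's released code: the function name, the batch-pairing convention (matched pairs share a row index), and the temperature value are all hypothetical.

```python
# Illustrative sketch (NOT the paper's implementation): an InfoNCE-style
# contrastive matching objective that aligns VLA action embeddings with
# world-model video-dynamics latents, avoiding pixel-space supervision.
import numpy as np

def contrastive_matching_loss(action_latents, dynamics_latents, temperature=0.1):
    """Symmetric InfoNCE loss over a batch of paired latents.

    action_latents:   (B, D) embeddings of the policy's predicted actions.
    dynamics_latents: (B, D) dynamics latents extracted from WM rollouts.
    Matched pairs share a row index; all other rows serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = action_latents / np.linalg.norm(action_latents, axis=1, keepdims=True)
    d = dynamics_latents / np.linalg.norm(dynamics_latents, axis=1, keepdims=True)
    logits = a @ d.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(a))              # positives lie on the diagonal

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Symmetric: actions-to-dynamics plus dynamics-to-actions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

In this setup, well-aligned action/dynamics pairs drive the loss toward zero, while mismatched pairs are pushed apart in the shared latent space.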