Video models are zero-shot learners and reasoners

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether video foundation models can serve as unified, general-purpose visual foundation models capable of zero-shot, cross-task visual understanding and reasoning. Method: Using Veo 3, a large-scale generative video model trained on web-scale video data, we systematically evaluate its emergent visual capabilities without task-specific supervision or fine-tuning. Contribution/Results: We demonstrate that Veo 3 achieves strong zero-shot generalization across diverse vision tasks, including object segmentation, edge detection, image editing, physical-property understanding, maze solving, and symmetry solving, without any adaptation. The study provides empirical evidence that large generative video models inherently support multiple levels of visual cognition: perception, world modeling, manipulation, and reasoning. These findings point toward a new paradigm for developing unified vision foundation models.

📝 Abstract
The remarkable zero-shot capabilities of Large Language Models (LLMs) have propelled natural language processing from task-specific models to unified, generalist foundation models. This transformation emerged from simple primitives: large, generative models trained on web-scale data. Curiously, the same primitives apply to today's generative video models. Could video models be on a trajectory towards general-purpose vision understanding, much like LLMs developed general-purpose language understanding? We demonstrate that Veo 3 can solve a broad variety of tasks it wasn't explicitly trained for: segmenting objects, detecting edges, editing images, understanding physical properties, recognizing object affordances, simulating tool use, and more. These abilities to perceive, model, and manipulate the visual world enable early forms of visual reasoning like maze and symmetry solving. Veo's emergent zero-shot capabilities indicate that video models are on a path to becoming unified, generalist vision foundation models.
Problem

Research questions and friction points this paper is trying to address.

Can video models perform zero-shot visual reasoning without task-specific training?
Does Veo 3 generalize to diverse untrained tasks such as segmentation and image editing?
Are video models on a trajectory toward general-purpose vision foundation models?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic zero-shot evaluation of Veo 3 across perception, modeling, manipulation, and reasoning tasks
Demonstration that Veo 3 solves diverse vision tasks without explicit training or fine-tuning
Evidence that generative video models support emergent visual reasoning, such as maze and symmetry solving