Rethinking Visual Intelligence: Insights from Video Pretraining

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual foundation models lag significantly behind language models in compositional reasoning, sample efficiency, and general problem-solving, revealing a critical bottleneck in visual representation learning. To address this, we propose video diffusion models (VDMs), pretrained on large-scale spatiotemporal video data, as a backbone whose representations carry strong structural and dynamic inductive biases that support generalization. Methodologically, we use lightweight adapters for efficient cross-task transfer, avoiding full-model fine-tuning. Evaluation on reasoning benchmarks including ARC-AGI, ConceptARC, visual games, route planning, and cellular automata shows that VDMs generalize better than LLMs from substantially fewer samples, with particular gains in compositional reasoning and zero- and few-shot adaptation. This work points toward visual foundation models with language-model-like adaptability and robust out-of-distribution generalization.
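The summary mentions lightweight adapters in place of full-model fine-tuning but does not spell out an architecture. The snippet below is a minimal sketch of a generic residual bottleneck adapter in PyTorch; the class name, layer sizes, and the way it sits on top of a frozen backbone are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project -> nonlinearity -> up-project.

    Hidden and bottleneck sizes here are illustrative, not taken from the paper.
    """

    def __init__(self, hidden_dim: int = 1024, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        # Zero-init the up-projection so the adapter starts as an identity map
        # and the frozen backbone's pretrained behavior is initially unchanged.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


# Typical usage: freeze the pretrained backbone, train only the adapter weights.
# `backbone` here is a stand-in for a block of a pretrained LLM or VDM.
backbone = nn.TransformerEncoderLayer(d_model=1024, nhead=8, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False

adapter = BottleneckAdapter(hidden_dim=1024)
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

x = torch.randn(2, 16, 1024)   # (batch, tokens, hidden)
y = adapter(backbone(x))       # frozen backbone output refined by the trainable adapter
```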

📝 Abstract
Large language models (LLMs) have demonstrated that large-scale pretraining enables systems to adapt rapidly to new problems with little supervision in the language domain. This success, however, has not translated as effectively to the visual domain, where models, including LLMs, continue to struggle with compositional understanding, sample efficiency, and general-purpose problem-solving. We investigate Video Diffusion Models (VDMs) as a promising direction for bridging this gap. Pretraining on spatiotemporal data endows these models with strong inductive biases for structure and dynamics, which we hypothesize can support broad task adaptability. To test this, we design a controlled evaluation in which both a pretrained LLM and a pretrained VDM are equipped with lightweight adapters and presented with tasks in their natural modalities. Across benchmarks including ARC-AGI, ConceptARC, visual games, route planning, and cellular automata, VDMs demonstrate higher data efficiency than their language counterparts. Taken together, our results indicate that video pretraining offers inductive biases that support progress toward visual foundation models.
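The controlled evaluation described in the abstract presents each task in the model's natural modality: text for the LLM, pixels for the VDM. The sketch below illustrates, under assumptions, how a single ARC-style grid might be serialized as tokens for a language model and rendered as an image frame for a video model; the example grid, `to_text`, and `to_frame` are hypothetical illustrations, not the paper's data pipeline.

```python
import numpy as np

# Hypothetical 3x3 ARC-style grid of color indices (not taken from the paper).
grid = np.array([
    [0, 1, 0],
    [1, 2, 1],
    [0, 1, 0],
])

def to_text(grid: np.ndarray) -> str:
    """Serialize the grid row by row, the kind of token stream an LLM would read."""
    return "\n".join(" ".join(str(v) for v in row) for row in grid)

def to_frame(grid: np.ndarray, cell_px: int = 16, num_colors: int = 10) -> np.ndarray:
    """Render the grid as a grayscale image a video/diffusion model could consume.

    Each cell becomes a cell_px x cell_px block; color indices map to intensities.
    A real pipeline would more likely use an RGB palette and stack the input and
    output grids as consecutive frames, but that detail is assumed here.
    """
    intensities = (grid.astype(np.float32) / (num_colors - 1) * 255).astype(np.uint8)
    return np.kron(intensities, np.ones((cell_px, cell_px), dtype=np.uint8))

print(to_text(grid))        # text view for the language model
frame = to_frame(grid)      # 48x48 pixel view for the video model
print(frame.shape)          # (48, 48)
```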
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between language and visual domain adaptation capabilities
Addressing compositional understanding and sample efficiency in vision models
Developing visual foundation models with broad task adaptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Diffusion Models pretrained on spatiotemporal data
Lightweight adapters enable efficient task adaptation
Inductive biases from video pretraining enhance visual reasoning
Pablo Acuaviva
Computer Vision Group, University of Bern, Bern, Switzerland
Aram Davtyan
Computer Vision Group, University of Bern, Bern, Switzerland
Mariam Hassan
VITA Lab, EPFL, Lausanne, Switzerland
Sebastian Stapf
Computer Vision Group, University of Bern, Bern, Switzerland
Ahmad Rahimi
PhD student at VITA, EPFL
Computer Vision, Machine Learning, Artificial Intelligence, Self-driving Cars
Alexandre Alahi
Professor, EPFL
Computer Vision, Transportation, Autonomous driving, Intelligent Transportation Systems, AI
Paolo Favaro
Professor of Computer Vision, University of Bern
computer vision, machine learning, computational photography, inverse problems, optimization methods