BabyVLM-V2: Toward Developmentally Grounded Pretraining and Benchmarking of Vision Foundation Models

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the low sample efficiency and weak cognitive interpretability of vision foundation models during pretraining. Methodologically, it introduces a developmentally inspired pretraining paradigm: (1) curating a longitudinal, infant-centric audiovisual corpus; (2) designing a lightweight vision-language architecture with multi-granularity alignment across video, image, and dialogue modalities; and (3) developing DevCV, a multimodal cognitive benchmark adapted from the NIH Baby Toolbox that comprises ten developmentally sensitive tasks. The core contribution is a "developmental alignment" pretraining mechanism that systematically embeds principles from developmental psychology into data curation, task design, and evaluation. Experiments show that a compact model pretrained from scratch matches or exceeds GPT-4o on several DevCV tasks while substantially improving sample efficiency and cognitive plausibility.

📝 Abstract
Young children's developmental trajectories set a natural target for sample-efficient pretraining of vision foundation models. We introduce BabyVLM-V2, a developmentally grounded framework for infant-inspired vision-language modeling that substantially improves upon BabyVLM-V1 through a longitudinal, multifaceted pretraining set, a versatile model, and, most importantly, the DevCV Toolbox for cognitive evaluation. The pretraining set maximizes coverage while minimizing curation of a longitudinal, infant-centric audiovisual corpus, yielding video-utterance, image-utterance, and multi-turn conversational data that mirror infant experiences. The DevCV Toolbox adapts all vision-related measures of the recently released NIH Baby Toolbox into a benchmark suite of ten multimodal tasks covering spatial reasoning, memory, and vocabulary understanding, aligned with early childhood capabilities. Experimental results show that a compact model pretrained from scratch achieves competitive performance on the DevCV Toolbox, outperforming GPT-4o on some tasks. We hope the principled, unified BabyVLM-V2 framework will accelerate research on developmentally plausible pretraining of vision foundation models.
Problem

Research questions and friction points this paper is trying to address.

How to ground vision-language pretraining in early developmental trajectories
Lack of cognitive evaluation benchmarks aligned with early childhood capabilities
Low sample efficiency of pretraining compact vision foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developmentally grounded infant-inspired vision-language modeling framework
Longitudinal multifaceted pretraining set mirroring infant experiences
DevCV Toolbox benchmark suite for cognitive evaluation tasks