🤖 AI Summary
This study addresses the low sample efficiency and weak cognitive interpretability of vision foundation models during pretraining. Methodologically, it introduces a developmentally inspired pretraining paradigm: (1) constructing a longitudinal, infant-centric audiovisual corpus; (2) designing a lightweight vision-language architecture with multi-granularity alignment across video, image, and dialogue modalities; and (3) developing DevCV Toolbox, a multimodal cognitive benchmark adapted from the NIH Baby Toolbox that comprises ten developmentally sensitive tasks. The core contribution is a "developmental alignment" pretraining mechanism that systematically embeds principles from developmental psychology into data curation, task design, and evaluation. Experiments demonstrate that a compact model pretrained from scratch matches or exceeds GPT-4o on several DevCV Toolbox tasks while substantially improving sample efficiency and cognitive plausibility.
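To make the "multi-granularity alignment" idea concrete, below is a minimal sketch of contrastive alignment applied at two granularities (image-utterance and video-utterance pairs). The encoder shapes, module names, and the use of a symmetric InfoNCE objective are assumptions for illustration only; the actual BabyVLM-V2 architecture and losses are defined in the paper.

```python
# Illustrative sketch, not the authors' implementation: a tiny dual encoder that
# projects visual features and utterance embeddings into a shared space, with a
# symmetric InfoNCE loss applied to both image-level and video-level pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDualEncoder(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=256, shared_dim=128):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, shared_dim)   # visual side (image or pooled video)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)   # utterance side
        self.logit_scale = nn.Parameter(torch.tensor(2.0))

    def forward(self, vis_feats, txt_feats):
        v = F.normalize(self.vis_proj(vis_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return v, t, self.logit_scale.exp()


def clip_style_loss(v, t, scale):
    """Symmetric InfoNCE over a batch of paired (visual, utterance) embeddings."""
    logits = scale * v @ t.t()
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Toy batch: 8 image-utterance pairs and 8 video-utterance pairs (frames mean-pooled).
model = TinyDualEncoder()
img_feats = torch.randn(8, 512)
img_utts = torch.randn(8, 256)
vid_feats = torch.randn(8, 16, 512).mean(dim=1)  # 16 frames per clip, pooled
vid_utts = torch.randn(8, 256)

v_i, t_i, s = model(img_feats, img_utts)
v_v, t_v, _ = model(vid_feats, vid_utts)
loss = clip_style_loss(v_i, t_i, s) + clip_style_loss(v_v, t_v, s)
loss.backward()
print(float(loss))
```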
📝 Abstract
Young children's developmental trajectories set a natural goal for sample-efficient pretraining of vision foundation models. We introduce BabyVLM-V2, a developmentally grounded framework for infant-inspired vision-language modeling that extensively improves upon BabyVLM-V1 through a longitudinal, multifaceted pretraining set, a versatile model, and, most importantly, DevCV Toolbox for cognitive evaluation. The pretraining set maximizes coverage of a longitudinal, infant-centric audiovisual corpus while minimizing curation, yielding video-utterance, image-utterance, and multi-turn conversational data that mirror infant experiences. DevCV Toolbox adapts all vision-related measures of the recently released NIH Baby Toolbox into a benchmark suite of ten multimodal tasks covering spatial reasoning, memory, and vocabulary understanding, aligned with the capabilities of young children. Experimental results show that a compact model pretrained from scratch can achieve competitive performance on DevCV Toolbox, outperforming GPT-4o on some tasks. We hope the principled, unified BabyVLM-V2 framework will accelerate research on developmentally plausible pretraining of vision foundation models.
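For readers unfamiliar with benchmarks of this kind, the sketch below shows one way a forced-choice DevCV-style trial (e.g., pick the image that matches a spoken word) could be scored by embedding similarity. The trial format, field names, and scoring rule here are hypothetical; the real evaluation protocol is specified by the DevCV Toolbox release.

```python
# Hypothetical scoring loop for a forced-choice multimodal trial; illustration only.
import torch
import torch.nn.functional as F


def score_forced_choice(prompt_emb, candidate_embs, answer_idx):
    """Return 1 if the candidate most similar to the prompt is the ground-truth option."""
    sims = F.cosine_similarity(prompt_emb.unsqueeze(0), candidate_embs, dim=-1)
    return int(sims.argmax().item() == answer_idx)


# Toy trials: a prompt embedding, 4 candidate image embeddings, and the correct index.
torch.manual_seed(0)
trials = [
    {"prompt": torch.randn(128), "candidates": torch.randn(4, 128), "answer": i % 4}
    for i in range(20)
]
accuracy = sum(
    score_forced_choice(t["prompt"], t["candidates"], t["answer"]) for t in trials
) / len(trials)
print(f"forced-choice accuracy: {accuracy:.2f}")  # chance is 0.25 with 4 options
```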