🤖 AI Summary
This work investigates whether infant-scale visual learning supports fine-grained attribute recognition—such as color, size, and texture—beyond the category-level identification that prior infant-inspired models (e.g., CVCL) have demonstrated. To this end, we introduce the first controllable, infant-first-person benchmark with systematic attribute variations, enabling rigorous cross-modal evaluation of attribute understanding in CVCL and CLIP. Our methodology integrates infant-video-driven contrastive learning (CVCL), CLIP's zero-shot transfer, controllable attribute synthesis, and embedding-space analysis. Results show that CVCL significantly outperforms CLIP on size discrimination, while CLIP excels at color recognition; both models fail to achieve reliable vision–language alignment for texture, with accuracy far below human infants, revealing a critical gap in texture representation and linguistic grounding. This study is the first to systematically assess infant-scale learning's capacity for non-categorical attribute discrimination, establishing a novel benchmark and yielding foundational cognitive insights into the limits of multimodal alignment.
📝 Abstract
Infants learn to recognize not only object categories but also fine-grained attributes such as color, size, and texture within their first two years of life. Prior work explores Child's View for Contrastive Learning (CVCL), a CLIP-style model trained on infant egocentric video, as a computational model of early infant learning, but it focuses only on class-level recognition. This leaves it unclear whether infant-scale learning also supports attribute discrimination. To address this, we introduce a benchmark that systematically varies color, size, and texture, allowing controlled tests of within-class attribute recognition. Comparing CVCL with CLIP reveals clear differences: CVCL is better at size discrimination, while CLIP achieves higher accuracy on color discrimination. Both models represent texture in image embeddings but fail to ground texture linguistically, suggesting a gap between the visual and language spaces.
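The cross-modal evaluation described above can be sketched as a zero-shot classification step: embed the image and a set of attribute captions, then pick the caption whose embedding is closest in cosine similarity. This is a minimal illustration only; the random vectors below stand in for real CVCL or CLIP encoder outputs, and the caption strings are hypothetical examples, not the benchmark's actual prompts.

```python
import numpy as np

def cross_modal_predict(image_emb: np.ndarray,
                        text_embs: np.ndarray,
                        labels: list) -> str:
    """Return the label whose text embedding has the highest
    cosine similarity with the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarities, shape (num_labels,)
    return labels[int(np.argmax(sims))]

# Placeholder embeddings standing in for real CVCL/CLIP encoder outputs.
rng = np.random.default_rng(0)
labels = ["red ball", "blue ball", "green ball"]
text_embs = rng.normal(size=(3, 512))
# Simulate an image whose embedding lies near the "blue ball" caption.
image_emb = text_embs[1] + 0.1 * rng.normal(size=512)
print(cross_modal_predict(image_emb, text_embs, labels))  # → blue ball
```

Accuracy on the benchmark would then be the fraction of images for which the predicted attribute caption matches the ground-truth variation.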