Learning Through Little Eyes: Attribute Discrimination Beyond Objects

📅 2025-12-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether infant-scale visual learning supports fine-grained attribute recognition, such as color, size, and texture, beyond the category-level identification that prior infant-inspired models (e.g., CVCL) have demonstrated. To this end, the authors introduce the first controllable, infant-first-person benchmark with systematic attribute variations, enabling rigorous cross-modal evaluation of attribute understanding in CVCL and CLIP. The methodology combines infant-video-driven contrastive learning (CVCL), CLIP's zero-shot transfer, controllable attribute synthesis, and embedding-space analysis. Results show that CVCL significantly outperforms CLIP on size discrimination, while CLIP excels at color recognition; both models fail to achieve reliable vision–language alignment for texture, with accuracy far below that of human infants, revealing a critical cognitive gap in texture representation and linguistic grounding. This study is the first to systematically assess whether infant-scale learning supports non-categorical attribute discrimination, establishing a novel benchmark and yielding foundational cognitive insights into the limitations of multimodal alignment.
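The cross-modal evaluation described above amounts to zero-shot attribute classification: embed an image and a set of candidate attribute captions, then pick the caption with the highest cosine similarity. Below is a minimal sketch of that procedure; the `encode_image`/`encode_text` interface, the tokenizer behavior (open_clip-style, mapping a list of strings to a token tensor), and the trial structure are assumptions for illustration, not the paper's actual evaluation code.

```python
import torch
import torch.nn.functional as F

def zero_shot_attribute_accuracy(model, tokenizer, trials, device="cpu"):
    """Zero-shot attribute discrimination for a CLIP-style model.

    Each trial is a dict with a preprocessed image tensor, a list of
    candidate captions (e.g. "a red ball" vs. "a blue ball"), and the
    index of the correct caption. The model is assumed to expose
    `encode_image` / `encode_text`, as CLIP-style models typically do.
    """
    correct = 0
    with torch.no_grad():
        for t in trials:
            img_emb = F.normalize(
                model.encode_image(t["image"].unsqueeze(0).to(device)), dim=-1
            )
            tokens = tokenizer(t["captions"]).to(device)
            txt_emb = F.normalize(model.encode_text(tokens), dim=-1)
            sims = img_emb @ txt_emb.T              # cosine similarities, shape (1, num_captions)
            pred = sims.argmax(dim=-1).item()       # best-matching caption
            correct += int(pred == t["answer"])
    return correct / len(trials)

# Hypothetical trial for within-class color discrimination:
# {"image": red_ball_tensor,
#  "captions": ["a red ball", "a blue ball", "a green ball"],
#  "answer": 0}
```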

📝 Abstract
Infants learn to recognize not only object categories but also fine-grained attributes such as color, size, and texture within their first two years of life. Prior work explores the Child's View for Contrastive Learning (CVCL) model, a CLIP-style model trained on infant egocentric video, as a computational model of early infant learning, but it focuses only on class-level recognition. This leaves it unclear whether infant-scale learning also supports attribute discrimination. To address this, we introduce a benchmark that systematically varies color, size, and texture, allowing controlled tests of within-class attribute recognition. Comparing CVCL with CLIP shows clear differences. CVCL is better at size discrimination, while CLIP achieves higher accuracy on color discrimination. Both models represent texture in image embeddings but fail to ground texture linguistically, suggesting a gap between the visual and language spaces.
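One way to test the abstract's claim that texture is present in the image embeddings but not grounded in language is to compare a supervised linear probe on frozen image embeddings against zero-shot caption matching on the same images. The sketch below assumes embeddings, labels, and zero-shot predictions have already been computed; the variable names and probe setup are illustrative stand-ins, not the paper's actual embedding-space analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_vs_zero_shot(image_embs, texture_labels, zero_shot_preds):
    """Compare how much texture information a linear probe can read out of
    frozen image embeddings versus what zero-shot caption matching recovers.

    image_embs:      (N, D) array of frozen image embeddings
    texture_labels:  (N,) integer texture classes (e.g. smooth / fuzzy / striped)
    zero_shot_preds: (N,) texture predictions from image-to-caption matching
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        image_embs, texture_labels,
        test_size=0.3, random_state=0, stratify=texture_labels,
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    probe_acc = probe.score(X_te, y_te)                          # texture readable from vision
    zs_acc = float(np.mean(zero_shot_preds == texture_labels))   # texture grounded in language
    return probe_acc, zs_acc
```

A large gap between `probe_acc` and `zs_acc` would indicate that the visual space encodes texture even though the vision–language alignment does not expose it.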
Problem

Research questions and friction points this paper is trying to address.

Investigates whether infant-scale learning supports attribute discrimination beyond object categories
Compares CVCL and CLIP on color, size, and texture recognition
Examines gaps between the visual and linguistic grounding of attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controllable benchmark for within-class attribute recognition tests (see the sketch after this list)
CVCL outperforms CLIP at size discrimination
Both models represent texture visually but fail to ground it linguistically
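The controlled benchmark varies one attribute at a time within a class, so each trial isolates a single attribute contrast. Below is a minimal sketch of how such within-class trials could be enumerated; the attribute values, caption template, and trial format are hypothetical and stand in for the paper's controllable attribute-synthesis pipeline.

```python
from itertools import product

COLORS = ["red", "blue", "green"]
SIZES = ["small", "large"]
TEXTURES = ["smooth", "fuzzy", "striped"]

def make_color_trials(category="ball"):
    """Enumerate within-class color-discrimination trials: each trial fixes
    size and texture and asks the model to pick the caption with the
    correct color for the rendered object."""
    trials = []
    for size, texture in product(SIZES, TEXTURES):
        for target in COLORS:
            captions = [f"a {size} {texture} {c} {category}" for c in COLORS]
            trials.append({
                # specification handed to the (hypothetical) image synthesizer
                "render_spec": {"category": category, "color": target,
                                "size": size, "texture": texture},
                "captions": captions,
                "answer": COLORS.index(target),
            })
    return trials
```

Analogous generators for size and texture would hold the other two attributes fixed, keeping every comparison within the same object class.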