🤖 AI Summary
This study investigates the temporal alignment between infants' visual and linguistic experiences in everyday life: specifically, how often an object is in view at the moment its name is spoken, a signal thought to be critical for early language acquisition. Method: to overcome the cost and limited scalability of manual annotation, we propose an automated alignment measure based on CLIP (contrastive language-image pretraining), validated against human alignment judgments. Contribution/Results: applying this measure at scale to a corpus of egocentric home videos recorded from the infant's perspective, we find that tightly aligned visual-linguistic moments are rare in naturalistic settings, substantially rarer than in standard machine learning datasets, and that alignment varies considerably both across infants and within individual infants across contexts. These findings empirically establish that naturalistic language-learning signals are sparse and heterogeneous, and they provide both foundational evidence and a new methodology for developing ecologically valid theories and computational models of multimodal learning in early childhood.
📝 Abstract
Figuring out which objects or concepts words refer to is a central language learning challenge for young children. Most models of this process posit that children learn early object labels from co-occurrences of words and their referents that occur when someone around them talks about an object in the immediate physical environment. But how aligned in time are children's visual and linguistic experiences during everyday learning? To date, answers to this question have been limited by the need for labor-intensive manual annotations of vision-language co-occurrences. Here, we evaluate the use of contrastive language-image pretraining (CLIP) models to automatically characterize vision-language alignment in egocentric videos taken from the infant perspective in home environments. After validating CLIP alignment scores using human alignment judgments, we apply this metric to a large corpus of infant-perspective videos. We show that idealized aligned moments for learning (e.g., "look at the ball" with a ball present in the child's view) are relatively rare in children's everyday experiences compared to modern machine learning datasets, and highlight variability in alignment both within and across children. These findings suggest that infrequent alignment is a constraint for models describing early word learning and offer a new method for investigating children's multimodal environment.
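To make the core scoring step concrete, the snippet below is a minimal sketch of how a vision-language alignment score between an egocentric video frame and a transcribed utterance could be computed with an off-the-shelf CLIP checkpoint via the Hugging Face transformers library. The checkpoint name, file names, and frame-utterance pairing shown here are illustrative assumptions, not details taken from the paper's pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): scoring vision-language
# alignment with an off-the-shelf CLIP model from Hugging Face transformers.
# The checkpoint, frame file, and utterance below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def alignment_score(frame: Image.Image, utterance: str) -> float:
    """Cosine similarity between a video frame and a transcribed utterance."""
    inputs = processor(text=[utterance], images=frame,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
        # image_embeds / text_embeds are the projected, L2-normalized features
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())

# Example: score one hypothetical frame against a spoken label.
frame = Image.open("frame_0421.jpg")  # hypothetical egocentric frame
score = alignment_score(frame, "look at the ball")
print(f"CLIP alignment score: {score:.3f}")
```

In a study like this, scores of this kind would typically be computed for each co-occurring frame-utterance pair and compared against human alignment judgments before being applied corpus-wide; the specific thresholds and aggregation choices are the paper's, not shown here.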