From Infants to AI: Incorporating Infant-like Learning in Models Boosts Efficiency and Generalization in Learning Social Prediction Tasks

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Human infants rapidly acquire foundational social concepts—such as animacy and goal attribution—via unsupervised or few-shot learning, enabling robust future event prediction; in contrast, current AI models rely heavily on large-scale labeled datasets and exhibit poor generalization. Method: Inspired by developmental psychology, we formalize the evolutionary mechanism of conceptual hierarchies and propose a multi-stage neural framework integrating causal modeling, self-supervised representation learning, and concept-disentanglement regularization—where innate social concepts guide subsequent representation learning. Contribution/Results: Our approach substantially narrows the gap between AI and human-like conceptual structure: it improves accuracy by 12.7% on social prediction tasks, reduces data requirements by 65%, and yields representations that demonstrate significantly superior cross-task and cross-scenario generalization compared to baseline models.

Technology Category

Application Category

📝 Abstract
Early in development, infants learn a range of useful concepts, which can be challenging from a computational standpoint. This early learning comes together with an initial understanding of aspects of the meaning of concepts, e.g., their implications, causality, and using them to predict likely future events. All this is accomplished in many cases with little or no supervision, and from relatively few examples, compared with current network models. In learning about objects and human-object interactions, early acquired and possibly innate concepts are often used in the process of learning additional, more complex concepts. In the current work, we model how early-acquired concepts are used in the learning of subsequent concepts, and compare the results with standard deep network modeling. We focused in particular on the use of the concepts of animacy and goal attribution in learning to predict future events. We show that the use of early concepts in the learning of new concepts leads to better learning (higher accuracy) and more efficient learning (requiring less data). We further show that this integration of early and new concepts shapes the representation of the concepts acquired by the model. The results show that when the concepts were learned in a human-like manner, the emerging representation was more useful, as measured in terms of generalization to novel data and tasks. On a more general level, the results suggest that there are likely to be basic differences in the conceptual structures acquired by current network models compared to human learning.
Problem

Research questions and friction points this paper is trying to address.

Incorporating infant-like learning improves AI model efficiency and generalization.
Early-acquired concepts enhance learning accuracy and reduce data requirements.
Human-like concept learning leads to better generalization in AI models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates infant-like learning in AI models
Uses early-acquired concepts for better generalization
Enhances learning efficiency with fewer data examples
🔎 Similar Papers
No similar papers found.
S
Shify Treger
Department of Computer Science and Applied Mathematics, Weizmann Institute of Science
Shimon Ullman
Shimon Ullman
Professor of Computer Science Weizmann Institute of Science
Computer vision human vision brain modeling