Publications
Published several high-impact papers, such as 'Grounded language acquisition through the eyes and ears of a single child' (Science, 2024) and 'Learning high-level visual representations from a child's perspective without strong inductive biases' (Nature Machine Intelligence, 2024). These studies have uncovered key cognitive ingredients and inductive biases missing from current models.
Research Experience
Leads a lab focused on few-shot learning of new concepts, learning by generating new goals, learning by asking questions, and learning by producing novel combinations of known components. Technical efforts focus on modern neural network modeling, including meta-learning, fine-tuning LLMs, neuro-symbolic modeling, and learning from child headcam videos.
Education
No detailed educational background information is provided.
Background
Associate Professor of Computer Science and Psychology at Princeton University. Research interests include the unique components of human intelligence, using advances in machine intelligence to better understand human intelligence, and leveraging insights from human intelligence to develop more fruitful kinds of machine intelligence.
Miscellany
Teaches courses including Computational Models of Cognition (COS 360 / PSY 360) at Princeton, and has taught multiple courses related to cognitive science at NYU.