Language and Experience: A Computational Model of Social Learning in Complex Tasks

📅 2025-08-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of enhancing safe and efficient learning for humans and AI agents in novel environments by integrating linguistic guidance from others with individual direct experience. We propose a Bayesian inference–based social learning computational framework that transforms pre-trained language models into calibrated, conditional probabilistic models for generating and interpreting advice—enabling cross-generational knowledge accumulation and bidirectional human-AI knowledge transfer. Methodologically, the framework jointly models structured world representations, multimodal sensorimotor data, and natural language inputs via coupled probabilistic inference and iterative learning. Evaluated across ten video game environments, our approach accelerates policy learning by 2.3× on average, reduces high-risk interactions by 41%, and empirically validates both effective distillation of human instructions into AI policies and interpretable reverse feedback from AI strategies to human users.

Technology Category

Application Category

📝 Abstract
The ability to combine linguistic guidance from others with direct experience is central to human development, enabling safe and rapid learning in new environments. How do people integrate these two sources of knowledge, and how might AI systems? We present a computational framework that models social learning as joint probabilistic inference over structured, executable world models given sensorimotor and linguistic data. We make this possible by turning a pretrained language model into a probabilistic model of how humans share advice conditioned on their beliefs, allowing our agents both to generate advice for others and to interpret linguistic input as evidence during Bayesian inference. Using behavioral experiments and simulations across 10 video games, we show how linguistic guidance can shape exploration and accelerate learning by reducing risky interactions and speeding up key discoveries in both humans and models. We further explore how knowledge can accumulate across generations through iterated learning experiments and demonstrate successful knowledge transfer between humans and models -- revealing how structured, language-compatible representations might enable human-machine collaborative learning.
Problem

Research questions and friction points this paper is trying to address.

Modeling integration of language and experience in learning
Developing AI systems for social learning from advice
Enabling human-machine collaborative learning through language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic inference model integrating sensorimotor and linguistic data
Pretrained language model converted to belief-conditional advice generator
Structured executable world models enabling human-machine knowledge transfer
🔎 Similar Papers
No similar papers found.