The Limits of Learning from Pictures and Text: Vision-Language Models and Embodied Scene Understanding

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study asks whether vision-language models trained solely on image–text co-occurrence can reach human-level scene understanding, particularly in affordance perception, a dimension inherently tied to embodied experience. Across 15 high-level tasks, the authors compare 18 models against over 2,000 human participants and introduce the Human-Calibrated Cosine Distance (HCD) metric to quantify how closely model outputs match human response distributions on tasks that lack ground-truth answers. Combining large-scale model evaluation, human behavioral experiments, corpus analysis, and six mechanistic hypothesis tests, the study finds that while models approach human performance on general knowledge tasks, they show a persistent, structural deficit in affordance understanding that is not mitigated by prompt engineering and does not improve with newer model releases.

📝 Abstract
What information is sufficient to learn the full richness of human scene understanding? The distributional hypothesis holds that the statistical co-occurrence of language and images captures the conceptual knowledge underlying visual cognition. Vision-language models (VLMs) are trained on massive paired text-image corpora but lack embodied experience, making them an ideal test of the distributional hypothesis. We report two experiments comparing descriptions generated by 18 VLMs to those of over 2000 human observers across 15 high-level scene understanding tasks, spanning general knowledge, affordances, sensory experiences, affective responses, and future prediction. Because many tasks lack ground truth answers, we developed a Human-Calibrated Cosine Distance (HCD) metric that measures VLM output similarity to the distribution of human responses, scaled by within-human variability. In Experiment 1, VLMs approached human-level performance on general knowledge tasks, but showed a robust deficit for affordance tasks that resisted prompt engineering and did not improve with newer model releases. In Experiment 2, we tested six mechanistic hypotheses for explaining this affordance gap, finding that the deficit was structural rather than stylistic and was not resolved by providing explicit spatial information. Corpus analyses revealed that image captioning datasets contain sparse agent-addressed affordance language, consistent with Gricean accounts of why embodied knowledge may be systematically underrepresented in language. Together, these findings suggest that distributional learning from images and text is insufficient for affordance-based scene understanding, implying that some dimensions of human visual cognition may require the kind of agent-centered, three-dimensional experience that no photograph or caption can encode.
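The abstract describes the HCD metric only at a high level: cosine distance between a model's output and the distribution of human responses, scaled by within-human variability. The sketch below is one plausible reading of that description, not the authors' implementation; the function names and the use of mean pairwise human-human distance as the calibration term are assumptions for illustration, and the embeddings are stand-ins for whatever sentence representation the paper actually uses.

```python
# Illustrative sketch of an HCD-style metric (hypothetical; exact formula is in the paper).
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 minus cosine similarity between two embedding vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hcd(model_vec: np.ndarray, human_vecs: list[np.ndarray]) -> float:
    """Distance from a model response to the human response distribution,
    scaled by within-human variability (assumed here to be the mean
    pairwise human-human cosine distance)."""
    # Model-to-human term: mean distance from the model output to each human response.
    model_to_human = np.mean([cosine_distance(model_vec, h) for h in human_vecs])
    # Within-human term: mean distance over all pairs of human responses.
    pair_dists = [
        cosine_distance(human_vecs[i], human_vecs[j])
        for i in range(len(human_vecs))
        for j in range(i + 1, len(human_vecs))
    ]
    within_human = np.mean(pair_dists)
    # Values near 1.0 mean the model is about as far from the humans as
    # humans are from each other; larger values indicate a human-model gap.
    return float(model_to_human / within_human)

# Toy example with 3-D vectors standing in for sentence embeddings.
humans = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1]), np.array([0.85, 0.15, 0.05])]
model = np.array([0.2, 0.7, 0.4])
print(round(hcd(model, humans), 2))
```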
Problem

Research questions and friction points this paper is trying to address.

vision-language models
scene understanding
affordances
embodied cognition
distributional learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Affordance
Human-Calibrated Cosine Distance
Embodied Cognition
Scene Understanding
👥 Authors

Gillian Rosenberg
Barnard College, Columbia University

Skylar Stadhard
Barnard College, Columbia University

Bruce C. Hansen
Colgate University

Michelle R. Greene
Assistant Professor of Psychology, Barnard College, Columbia University
Vision science, cognitive psychology, cognitive science, scene perception