🤖 AI Summary
This study investigates whether vision-language models trained solely on image–text co-occurrence can achieve human-level scene understanding, particularly in affordance perception—a dimension inherently tied to embodied experience. Through 15 high-level tasks, the authors compare the performance of 18 models against over 2,000 human participants and introduce the Human-Calibrated Cosine Distance (HCD) metric to quantify the similarity between model outputs and human response distributions in tasks lacking ground-truth answers. Integrating large-scale model evaluation, human behavioral experiments, corpus analysis, and six mechanistic hypothesis tests, the research reveals that while models approach human performance on commonsense tasks, they exhibit persistent structural deficits in affordance understanding—deficits not readily mitigated by prompt engineering or iterative model scaling.
📝 Abstract
What information is sufficient to learn the full richness of human scene understanding? The distributional hypothesis holds that the statistical co-occurrence of language and images captures the conceptual knowledge underlying visual cognition. Vision-language models (VLMs) are trained on massive paired image–text corpora but lack embodied experience, making them an ideal test of the distributional hypothesis. We report two experiments comparing descriptions generated by 18 VLMs to those of over 2,000 human observers across 15 high-level scene understanding tasks, spanning general knowledge, affordances, sensory experiences, affective responses, and future prediction. Because many tasks lack ground-truth answers, we developed a Human-Calibrated Cosine Distance (HCD) metric that measures VLM output similarity to the distribution of human responses, scaled by within-human variability. In Experiment 1, VLMs approached human-level performance on general knowledge tasks, but showed a robust deficit for affordance tasks that resisted prompt engineering and did not improve with newer model releases. In Experiment 2, we tested six mechanistic hypotheses to explain this affordance gap, finding that the deficit was structural rather than stylistic and was not resolved by providing explicit spatial information. Corpus analyses revealed that image captioning datasets contain sparse agent-addressed affordance language, consistent with Gricean accounts of why embodied knowledge may be systematically underrepresented in language. Together, these findings suggest that distributional learning from images and text is insufficient for affordance-based scene understanding, implying that some dimensions of human visual cognition may require the kind of agent-centered, three-dimensional experience that no photograph or caption can encode.
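The abstract describes the HCD metric only at a high level: the distance from a model's output to the distribution of human responses, scaled by within-human variability. The paper's exact formulation is not given here, so the following is a minimal illustrative sketch of one plausible version, assuming responses have already been mapped to embedding vectors (the function name `hcd` and the specific normalization are assumptions, not the authors' definition):

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance: 1 minus cosine similarity."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hcd(vlm_emb: np.ndarray, human_embs: np.ndarray) -> float:
    """Illustrative Human-Calibrated Cosine Distance (assumed formulation).

    Mean cosine distance from the VLM output embedding to each human
    response embedding, divided by the mean pairwise cosine distance
    among the human responses (a proxy for within-human variability).
    A value near 1 would mean the model is about as far from humans
    as humans are from each other.
    """
    model_to_human = np.mean([cosine_distance(vlm_emb, h) for h in human_embs])
    n = len(human_embs)
    within_human = np.mean([
        cosine_distance(human_embs[i], human_embs[j])
        for i in range(n) for j in range(i + 1, n)
    ])
    return float(model_to_human / within_human)
```

Under this reading, calibration by within-human variability prevents tasks with inherently diverse human answers (e.g., affective responses) from being scored as harshly as tasks where humans converge on a single answer.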