🤖 AI Summary
This study investigates whether visual context influences human and large language model (LLM) judgments of sentence acceptability, and systematically compares their processing mechanisms in multimodal settings. Through human behavioral experiments and normalized log-probability analyses of multiple LLMs (including Qwen) with and without visual input, the research provides the first quantitative assessment of how visual information affects acceptability judgments. The findings reveal that visual context has a negligible impact on human judgments but slightly reduces the alignment between LLM predictions and human responses. Notably, performance varies across models, with Qwen exhibiting the behavior closest to that of humans. This work highlights a gap between current LLMs and human cognition in multimodal semantic understanding.
📝 Abstract
Previous work has examined the capacity of deep neural networks (DNNs), particularly transformers, to predict human sentence acceptability judgments, both independently of context and in document contexts. We consider the effect of prior exposure to visual images (i.e., visual context) on these judgments for humans and large language models (LLMs). Our results suggest that, in contrast to textual context, visual images appear to have little, if any, impact on human acceptability ratings. However, LLMs display the compression effect seen in previous work on human judgments in document contexts, with ratings shifting towards the middle of the scale. Different types of LLMs predict human acceptability judgments with a high degree of accuracy, but in general their performance is slightly better when visual contexts are removed. Moreover, the distribution of LLM judgments varies among models, with Qwen resembling human patterns and others diverging from them. LLM-generated predictions of sentence acceptability are, in general, highly correlated with the models' normalised log probabilities. However, these correlations decrease when visual contexts are present, suggesting a larger gap between the internal representations of LLMs and their generated predictions in multimodal settings. Our experimental work suggests interesting points of similarity and difference between human and LLM processing of sentences in multimodal contexts.
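Since the analysis correlates LLM-generated acceptability predictions with normalised log probabilities, the sketch below illustrates one common normalisation scheme: the mean per-token log probability of a sentence under a causal language model. The choice of normalisation and the use of `gpt2` as a stand-in model are illustrative assumptions, not the paper's exact setup (which involves multimodal models and visual inputs).

```python
# A minimal sketch of length-normalised log probability, assuming the mean
# per-token log probability as the normalisation (one common choice; the
# paper may use a different scheme) and a small text-only model ("gpt2")
# as an illustrative stand-in for the LLMs studied.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def normalised_logprob(sentence: str) -> float:
    """Mean log probability per token of `sentence` under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the predicted tokens; its negation is the mean log probability.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# A grammatical sentence typically scores higher (less negative)
# than a scrambled variant of the same words.
print(normalised_logprob("The cat sat on the mat."))
print(normalised_logprob("Mat the on sat cat the."))
```

Correlating scores of this kind with mean human ratings (e.g., via Pearson's r) is the type of comparison the abstract describes between model-internal probabilities and generated acceptability predictions.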