🤖 AI Summary
This study investigates whether vision-language (VL) joint training restructures the organization of lexical-conceptual knowledge in language models, particularly taxonomic knowledge. Method: Using a minimal-pair experimental design, we combine behavioral evaluation (taxonomic vs. non-taxonomic text-based question answering) with representational analyses (internal activation patterns and response consistency) to compare text-only and VL models. Contribution/Results: We find no significant difference in the *content* of lexical taxonomic knowledge between the two model types; however, VL models perform better on tasks requiring taxonomic reasoning. This improvement stems from greater *efficiency in knowledge retrieval and deployment*, not from fundamental changes to the structure of semantic representations. Our results indicate that multimodal training primarily optimizes the *dynamics of knowledge access* rather than rewriting core lexical-semantic representations, highlighting a key mechanism underlying cross-modal alignment and knowledge generalization.
📝 Abstract
Does vision-and-language (VL) training change the linguistic representations of language models in meaningful ways? Most results in the literature have shown inconsistent or marginal differences, both behaviorally and representationally. In this work, we start from the hypothesis that the domain in which VL training could have a significant effect is lexical-conceptual knowledge, in particular its taxonomic organization. By comparing minimal pairs of text-only LMs and their VL-trained counterparts, we first show that the VL models often outperform the text-only ones on a text-only question-answering task that requires taxonomic understanding of the concepts mentioned in the questions. Using an array of targeted behavioral and representational analyses, we then show that the LMs and VLMs do not differ significantly in their taxonomic knowledge itself, but that they do differ in how they represent questions containing concepts in a taxonomic relation versus a non-taxonomic relation. This implies that the taxonomic knowledge itself does not change substantially with additional VL training, but that VL training does improve the deployment of this knowledge in the context of a specific task, even when the presentation of the task is purely linguistic.