Cross-Modal Taxonomic Generalization in (Vision-) Language Models

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether frozen pretrained language models can generalize their semantic knowledge to visual objects solely through cross-modal alignment, without explicit supervision from hypernym labels. The approach keeps both the language and vision encoders fixed and learns only an intermediate alignment mapping, while systematically removing hypernym training signals via counterfactual image–label experiments. The work presents the first evidence that cross-modal taxonomic generalization remains achievable even in the complete absence of hypernym supervision, revealing that this capability arises from the interplay between intra-class visual similarity and linguistic priors. Further fine-grained analysis demonstrates that the underlying visual structure critically influences the model's generalization performance.

📝 Abstract
What is the interplay between semantic representations learned by language models (LMs) from surface form alone and those learned from more grounded evidence? We study this question in a scenario where part of the input comes from a different modality -- in our case, in a vision-language model (VLM), where a pretrained LM is aligned with a pretrained image encoder. As a case study, we focus on the task of predicting hypernyms of objects represented in images. We do so in a VLM setup where the image encoder and LM are kept frozen, and only the intermediate mappings are learned. We progressively deprive the VLM of explicit evidence for hypernyms, and test whether knowledge of hypernyms is recoverable from the LM. We find that the LMs we study can recover this knowledge and generalize even in the most extreme version of this experiment (when the model receives no evidence of a hypernym during training). Additional experiments suggest that this cross-modal taxonomic generalization persists under counterfactual image-label mappings only when the counterfactual data have high visual similarity within each category. Taken together, these findings suggest that cross-modal generalization in LMs arises as a result of both coherence in the extralinguistic input and knowledge derived from language cues.
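The training setup the abstract describes (frozen image encoder, frozen LM, trainable intermediate mapping) can be sketched in miniature. The snippet below is purely illustrative and not the paper's implementation: random numpy matrices stand in for the frozen encoders' outputs, the dimensions are invented, and the alignment map is a single linear layer fitted with plain gradient descent on an MSE objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen pretrained components:
#   image_feats -> outputs of a frozen image encoder (never updated)
#   label_embs  -> the frozen LM's embeddings for the target labels
D_IMG, D_LM, N = 32, 16, 200
image_feats = rng.normal(size=(N, D_IMG))
label_embs = image_feats @ rng.normal(size=(D_IMG, D_LM)) * 0.1

init_mse = np.mean(label_embs ** 2)

# The only trainable parameter: a linear alignment map W (D_IMG -> D_LM).
W = np.zeros((D_IMG, D_LM))
lr = 0.01
for _ in range(2000):
    pred = image_feats @ W                          # map image features into LM space
    grad = image_feats.T @ (pred - label_embs) / N  # gradient of the MSE loss w.r.t. W
    W -= lr * grad

final_mse = np.mean((image_feats @ W - label_embs) ** 2)
print(f"MSE: {init_mse:.4f} -> {final_mse:.6f}")
```

Because only `W` is updated, any taxonomic knowledge the frozen components carry is preserved; in the paper's real setup the mapping is learned between actual encoder and LM representations rather than these synthetic features.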
Problem

Research questions and friction points this paper is trying to address.

cross-modal generalization
vision-language models
hypernym prediction
taxonomic knowledge
semantic representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal generalization
vision-language models
taxonomic reasoning
hypernym prediction
frozen pretraining