Evaluating Compositional Generalisation in VLMs and Diffusion Models

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically evaluates the compositional generalisation capabilities of vision-language models (VLMs) and diffusion models, focusing on object-attribute and object-relation binding, a core challenge in zero-shot (ZSL) and generalised zero-shot (GZSL) learning. It presents a comparative analysis of three model families (Diffusion Classifier, CLIP, and ViLT), examining the semantic compositionality of their embedding spaces. Results show that the Diffusion Classifier and ViLT excel at attribute-binding tasks, but all models degrade sharply on relation-based GZSL (e.g., "left"/"right"), revealing a fundamental bottleneck in representing and disambiguating relational concepts. These findings expose critical limitations of current multimodal models in structured semantic reasoning and establish the Diffusion Classifier as a generative alternative to discriminative models for compositional classification, with improved representational fidelity for bound semantic structures.
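The core idea behind a Diffusion Classifier is that a conditional diffusion model's noise-prediction error, averaged over timesteps and noise draws, can serve as a (negative) class score: the caption under which the model denoises the image best wins. The sketch below is a minimal illustration of that scoring rule only; `noise_pred_error` is a mocked stand-in, not the paper's actual model or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_pred_error(image, prompt, n_samples=32):
    """Monte-Carlo stand-in for E_{t, eps} || eps - eps_theta(x_t, t, prompt) ||^2.

    Mocked for illustration: the 'model' predicts noise with lower error
    when the prompt matches the image's true caption.
    """
    base = 1.0 if prompt == image["label"] else 2.0
    errs = base + 0.1 * rng.standard_normal(n_samples)
    return errs.mean()

def diffusion_classify(image, prompts):
    # Choose the caption whose conditional denoising error is lowest.
    errors = {p: noise_pred_error(image, p) for p in prompts}
    return min(errors, key=errors.get)

image = {"label": "a red cube and a blue cylinder"}
prompts = [
    "a red cube and a blue cylinder",
    "a blue cube and a red cylinder",  # attribute-swapped distractor
]
print(diffusion_classify(image, prompts))
```

In the real setting the error estimate requires one forward pass of the diffusion model per timestep sample and per candidate caption, which is why diffusion-based classification is far more expensive than a single CLIP similarity computation.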

📝 Abstract
A fundamental aspect of the semantics of natural language is that novel meanings can be formed from the composition of previously known parts. Vision-language models (VLMs) have made significant progress in recent years; however, there is evidence that they are unable to perform this kind of composition. For example, given an image of a red cube and a blue cylinder, a VLM such as CLIP is likely to incorrectly label the image as a red cylinder or a blue cube, indicating it represents the image as a 'bag-of-words' and fails to capture compositional semantics. Diffusion models have recently gained significant attention for their impressive generative abilities, and zero-shot classifiers based on diffusion models have been shown to perform competitively with CLIP in certain compositional tasks. In this work we explore whether the generative Diffusion Classifier has improved compositional generalisation abilities compared to discriminative models. We assess three models -- Diffusion Classifier, CLIP, and ViLT -- on their ability to bind objects with attributes and relations in both zero-shot learning (ZSL) and generalised zero-shot learning (GZSL) settings. Our results show that the Diffusion Classifier and ViLT perform well at concept binding tasks, but that all models struggle significantly with the relational GZSL task, underscoring the broader challenges VLMs face with relational reasoning. Analysis of CLIP embeddings suggests that the difficulty may stem from overly similar representations of relational concepts such as left and right. Code and dataset are available at: https://github.com/otmive/diffusion_classifier_clip
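The embedding analysis mentioned above boils down to comparing cosine similarities between text-encoder outputs for prompts that differ only in the relational word. The snippet below sketches that measurement with hand-picked toy vectors standing in for text-encoder outputs (a real analysis would use CLIP's text encoder on the actual prompts); the vectors are chosen only to illustrate the reported effect, namely that "left"/"right" prompts land almost on top of each other.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for text embeddings (NOT real CLIP outputs).
emb = {
    "a cube left of a sphere":  np.array([0.80, 0.60, 0.10]),
    "a cube right of a sphere": np.array([0.80, 0.58, 0.12]),
    "a red cube":               np.array([0.10, 0.20, 0.90]),
}

# Relational prompts differing only in "left"/"right" are nearly identical...
pair = cosine(emb["a cube left of a sphere"], emb["a cube right of a sphere"])
# ...while an unrelated attribute prompt is clearly separated.
other = cosine(emb["a cube left of a sphere"], emb["a red cube"])
print(pair > other)
```

If the encoder assigns near-identical vectors to "left" and "right" captions, no downstream similarity-based classifier can reliably tell the two relations apart, which is consistent with the relational GZSL failures reported here.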
Problem

Research questions and friction points this paper is trying to address.

Evaluating compositional generalisation in vision-language models
Assessing diffusion models' ability to bind attributes and relations
Testing zero-shot learning performance on relational reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Establishes the Diffusion Classifier as a generative alternative for compositional classification
Compares the Diffusion Classifier against the discriminative models CLIP and ViLT
Assesses attribute-binding and relational-reasoning capabilities in ZSL and GZSL settings