🤖 AI Summary
This study systematically evaluates the compositional generalization of diffusion-based classifiers on discriminative tasks and their sensitivity to target-domain distribution shifts. To disentangle model capacity from dataset bias, we propose Self-Bench, a self-generated diagnostic benchmark that decouples intrinsic model competence from domain-specific artifacts. We conduct the first comprehensive zero-shot evaluation of Stable Diffusion (SD) 1.5, 2.0, and 3-m across 10 diverse datasets and over 30 compositional reasoning tasks. Our analysis reveals that timestep-weighting strategies critically influence classification performance, with sensitivity escalating as domain divergence increases: SD3-m achieves the highest overall accuracy yet exhibits the greatest sensitivity to such shifts. Results indicate that diffusion classifiers possess nontrivial compositional reasoning ability, but their performance is highly contingent on alignment between the generative domain and the target task domain. All code, data, and the Self-Bench benchmark are publicly released.
📝 Abstract
Understanding visual scenes is fundamental to human intelligence. While discriminative models have significantly advanced computer vision, they often struggle with compositional understanding. In contrast, recent generative text-to-image diffusion models excel at synthesizing complex scenes, suggesting inherent compositional capabilities. Building on this, zero-shot diffusion classifiers have been proposed to repurpose diffusion models for discriminative tasks. While prior work offered promising results in discriminative compositional scenarios, these results remain preliminary due to the small number of benchmarks and the relatively shallow analysis of the conditions under which the models succeed. To address this, we present a comprehensive study of the discriminative capabilities of diffusion classifiers on a wide range of compositional tasks. Specifically, our study covers three diffusion models (SD 1.5, 2.0, and, for the first time, 3-m) spanning 10 datasets and over 30 tasks. Further, we shed light on the role that target dataset domains play in classifier performance; to isolate domain effects, we introduce a new diagnostic benchmark, Self-Bench, composed of images created by the diffusion models themselves. Finally, we explore the importance of timestep weighting and uncover a relationship between domain gap and timestep sensitivity, particularly for SD3-m. To sum up, diffusion classifiers understand compositionality, but conditions apply! Code and dataset are available at https://github.com/eugene6923/Diffusion-Classifiers-Compositionality.
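The zero-shot diffusion-classifier idea the abstract refers to can be sketched as follows: each candidate class conditions the model's noise predictor, and the class whose conditioning yields the lowest (timestep-weighted) denoising error wins. This is a minimal NumPy illustration, not the paper's implementation; the `denoiser` callable, the toy linear noise schedule, and the explicit `weights` argument (standing in for the timestep-weighting strategies the study analyzes) are all assumptions for the sake of the sketch.

```python
import numpy as np

def diffusion_classify(x, class_conds, denoiser, timesteps, weights, seed=0):
    """Pick the class whose conditioning best denoises x (sketch).

    `denoiser(x_t, cond, t)` is a hypothetical stand-in for a diffusion
    model's noise predictor; `weights[i]` weights the error at
    `timesteps[i]`, mimicking a timestep-weighting strategy.
    """
    rng = np.random.default_rng(seed)
    # Share the same noise draws across classes to reduce score variance.
    noises = [rng.standard_normal(x.shape) for _ in timesteps]
    scores = []
    for cond in class_conds:
        err = 0.0
        for t, w, eps in zip(timesteps, weights, noises):
            a_bar = 1.0 - t  # toy linear schedule (assumption), t in (0, 1)
            x_t = np.sqrt(a_bar) * x + np.sqrt(1.0 - a_bar) * eps
            # Weighted squared error between predicted and true noise.
            err += w * np.mean((denoiser(x_t, cond, t) - eps) ** 2)
        scores.append(err)
    return int(np.argmin(scores))
```

In this toy form, changing `weights` directly changes which class wins on borderline inputs, which is the timestep-sensitivity effect the summary highlights.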