🤖 AI Summary
This study investigates the compositional understanding and generalization capabilities of vision models, addressing a central question: *Can scaling data volume alone improve compositional generalization?* Through controlled experiments, the authors systematically vary data scale, concept diversity, and compositional coverage. Results demonstrate that compositional coverage, not total data volume, drives the emergence of linearly decomposable representations, enabling generalization to unseen compositions from only a few observed combinations. Empirical analysis of pretrained models (e.g., DINO, CLIP) confirms that current models possess rudimentary compositional generalization capacity, yet their performance remains limited by insufficient concept diversity in training data. Crucially, the work establishes a causal pathway: *compositional coverage → linearly decomposable representations → strong compositional generalization*. The findings provide both theoretical grounding and practical guidance for curating high-diversity datasets and designing efficient compositional learning methods.
📝 Abstract
Compositional understanding is crucial for human intelligence, yet it remains unclear whether contemporary vision models exhibit it. The dominant machine learning paradigm is built on the premise that scaling data and model sizes will improve out-of-distribution performance, including compositional generalization. We test this premise through controlled experiments that systematically vary data scale, concept diversity, and combination coverage. We find that compositional generalization is driven by data diversity, not mere data scale. Increased combinatorial coverage forces models to discover a linearly factored representational structure, where concepts decompose into additive components. We prove this structure is key to efficiency, enabling perfect generalization from few observed combinations. Evaluating pretrained models (DINO, CLIP), we find above-random yet imperfect performance, suggesting this structure is only partially present. Our work motivates a stronger emphasis on constructing diverse datasets for compositional generalization, and on representational structure that enables efficient compositional learning. Code available at https://github.com/oshapio/visual-compositional-generalization.
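To make the idea of a "linearly factored" representation concrete, here is a minimal NumPy sketch (not taken from the paper's code; the concept vectors, dimensions, and combination grid are all made up for illustration). If each composition's embedding is the sum of a shape component and a color component, then observing only a connected subset of the shape × color grid suffices to recover every component by least squares, and unseen combinations can be reconstructed exactly by adding the parts:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hypothetical embedding dimension

# Hypothetical ground-truth concept vectors: a linearly factored model
# assigns each shape and each color its own additive component.
shapes = {s: rng.normal(size=d) for s in ["circle", "square", "triangle"]}
colors = {c: rng.normal(size=d) for c in ["red", "green", "blue"]}

def embed(shape, color):
    """Embedding of a composition = sum of its concept components."""
    return shapes[shape] + colors[color]

# Observe only 5 of the 9 possible combinations, chosen so every concept
# is linked to the others through shared observations.
observed = [("circle", "red"), ("circle", "green"), ("circle", "blue"),
            ("square", "green"), ("triangle", "blue")]

# Recover per-concept components by least squares over the observed combos.
names = list(shapes) + list(colors)
A = np.zeros((len(observed), len(names)))
for i, (s, c) in enumerate(observed):
    A[i, names.index(s)] = 1.0
    A[i, names.index(c)] = 1.0
Y = np.stack([embed(s, c) for s, c in observed])
components, *_ = np.linalg.lstsq(A, Y, rcond=None)

def predict(shape, color):
    """Compose an embedding additively from the recovered components."""
    return components[names.index(shape)] + components[names.index(color)]

# An unseen combination is reconstructed exactly from the additive parts.
print(np.allclose(predict("square", "blue"), embed("square", "blue")))  # True
```

This is only a toy model of the structure the paper describes: the point is that additive decomposability lets a few observed combinations determine all the rest, which is why compositional coverage, rather than raw data volume, is what matters.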