🤖 AI Summary
This study investigates how object-centric (OC) representations improve compositional generalization and structured reasoning in visual question answering (VQA), and analyzes how they complement large pre-trained foundation models such as ViT and CLIP. We introduce the first large-scale empirical framework of its kind, evaluating over 600 downstream VQA models across 15 upstream representation types, including OC models such as Slot Attention and IODINE, and incorporating multi-stage fine-tuning and prompting strategies. Our key contributions are: (1) the first empirical validation that OC representations substantially improve compositional generalization on both synthetic (CLEVR) and real-world (GQA) benchmarks; (2) a hybrid paradigm that integrates OC representations with foundation models; and (3) experimental results demonstrating an average accuracy gain of 3.2% and a 21% improvement in robustness, revealing a synergistic division of labor: OC representations excel at structured, part-based reasoning, while foundation models provide open-domain semantic understanding.
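As a rough illustration of contribution (2), the sketch below shows one way frozen OC slot features and a frozen foundation-model embedding could be fused for a downstream VQA head. The module names, dimensions, and attention-based fusion here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class HybridVQAHead(nn.Module):
    """Hypothetical sketch: fuse object-centric slot features with a global
    foundation-model embedding, conditioned on the question, to predict an answer."""

    def __init__(self, num_slots=7, slot_dim=64, fm_dim=512,
                 question_dim=256, hidden_dim=512, num_answers=28):
        super().__init__()
        # Project each input modality into a shared hidden space.
        self.slot_proj = nn.Linear(slot_dim, hidden_dim)
        self.fm_proj = nn.Linear(fm_dim, hidden_dim)
        self.question_proj = nn.Linear(question_dim, hidden_dim)
        # Question-conditioned attention over slots (structured, part-based path).
        self.slot_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        # Classifier over the fused slot, foundation-model, and question features.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, slots, fm_embedding, question):
        # slots: (B, num_slots, slot_dim) from a frozen OC encoder (e.g. Slot Attention)
        # fm_embedding: (B, fm_dim) from a frozen foundation model (e.g. a CLIP image encoder)
        # question: (B, question_dim) pooled question embedding
        q = self.question_proj(question).unsqueeze(1)   # (B, 1, H)
        s = self.slot_proj(slots)                       # (B, S, H)
        attended, _ = self.slot_attn(q, s, s)           # question attends over object slots
        fused = torch.cat([attended.squeeze(1),
                           self.fm_proj(fm_embedding),
                           q.squeeze(1)], dim=-1)       # (B, 3H)
        return self.classifier(fused)                   # answer logits


if __name__ == "__main__":
    head = HybridVQAHead()
    logits = head(torch.randn(2, 7, 64), torch.randn(2, 512), torch.randn(2, 256))
    print(logits.shape)  # torch.Size([2, 28])
```

In this sketch the slot pathway carries the structured, object-level information while the foundation-model embedding supplies open-domain semantics, mirroring the division of labor described above; the actual fusion strategy in the paper may differ.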
📝 Abstract
Object-centric (OC) representations, which model visual scenes as compositions of discrete objects, have the potential to enable systematic compositional generalization and facilitate reasoning in a variety of downstream tasks. However, these claims have yet to be thoroughly validated empirically. Recently, foundation models have demonstrated unparalleled capabilities across diverse domains, from language to computer vision, positioning them as a potential cornerstone of future research for a wide range of computational tasks. In this paper, we conduct an extensive empirical study on representation learning for downstream Visual Question Answering (VQA), which requires an accurate compositional understanding of the scene. We thoroughly investigate the benefits and trade-offs of OC models and alternative approaches, including large pre-trained foundation models, on both synthetic and real-world data, ultimately identifying a promising path for leveraging the strengths of both paradigms. The extensiveness of our study, encompassing over 600 downstream VQA models and 15 different types of upstream representations, also provides several additional insights that we believe will be of interest to the community at large.