Compositional Scene Understanding through Inverse Generative Modeling

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative models lack robust scene understanding—particularly in handling variable object counts, diverse shapes, and distribution shifts relative to training data. Method: This work frames visual understanding as inverse inference over compositional generative models. We propose the first compositional inverse generation framework: it performs unsupervised inversion via energy-based modeling, modularly assembles generative components, and enables zero-shot adaptation of pretrained text-to-image models (e.g., Stable Diffusion) without fine-tuning. Contribution/Results: The method achieves multi-object perception without parameter updates, demonstrating strong generalization to unseen object counts, geometric configurations, and scene distributions. It accurately decomposes scenes into constituent objects and global scene factors—even in novel environments—thereby significantly improving robustness in multi-object recognition and enhancing structural interpretability of generated explanations.

📝 Abstract
Generative models have demonstrated remarkable abilities in generating high-fidelity visual content. In this work, we explore how generative models can further be used not only to synthesize visual content but also to understand the properties of a scene given a natural image. We formulate scene understanding as an inverse generative modeling problem, where we seek to find conditional parameters of a visual generative model that best fit a given natural image. To enable this procedure to infer scene structure from images substantially different from those seen during training, we further propose to build this visual generative model compositionally from smaller models over pieces of a scene. We illustrate how this procedure enables us to infer the set of objects in a scene, yielding robust generalization to new test scenes with an increased number of objects of new shapes. We further illustrate how this enables us to infer global scene factors, likewise generalizing robustly to new scenes. Finally, we illustrate how this approach can be directly applied to existing pretrained text-to-image generative models for zero-shot multi-object perception. Code and visualizations are at https://energy-based-model.github.io/compositional-inference.
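The inversion idea in the abstract — find conditioning parameters of a compositional generative model that best explain an image by minimizing a summed energy — can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: the energies here are simple quadratics standing in for learned energy-based models, each `c_i` stands in for one component's conditioning parameter, and splitting the observation into per-component parts is a simplifying assumption.

```python
import numpy as np

def component_energy(x_part, c):
    # Quadratic stand-in for a learned per-component energy model:
    # low energy when the conditioning parameter c explains x_part well.
    return np.sum((x_part - c) ** 2)

def composed_energy(x_parts, cs):
    # Compositional model: the scene energy is the sum of the
    # component energies (a product-of-experts in probability space).
    return sum(component_energy(p, c) for p, c in zip(x_parts, cs))

def invert(x_parts, lr=0.1, steps=200):
    # "Inverse generative modeling": gradient descent on the conditioning
    # parameters cs to minimize the composed energy of the observation.
    cs = [np.zeros_like(p) for p in x_parts]
    for _ in range(steps):
        for i, p in enumerate(x_parts):
            grad = 2.0 * (cs[i] - p)  # analytic gradient of the quadratic
            cs[i] -= lr * grad
    return cs
```

Because each component contributes its own energy term, components can be added or dropped at inference time, which is what lets this style of model handle a variable number of objects.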
Problem

Research questions and friction points this paper is trying to address.

How can scene understanding be framed as inverse inference over a generative model?
How can compositional structure support inferring scenes with variable object counts and novel shapes?
Can pretrained text-to-image models perform multi-object perception zero-shot, without fine-tuning?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inverse generative modeling for scene understanding
Compositional visual generative model building
Zero-shot multi-object perception via pretrained models