gen2seg: Generative Models Enable Generalizable Instance Segmentation

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether generative models implicitly acquire general-purpose perceptual organization capabilities and proposes a zero-shot, category-agnostic instance segmentation method that requires no class-level annotations. Methodologically, it shows that pre-trained generative models such as Stable Diffusion and MAE develop an emergent grouping mechanism during generative pre-training, and fine-tunes their encoder-decoder architectures on just two object categories (indoor furnishings and cars) with a novel instance coloring loss. The approach thus bypasses large-scale manual segmentation annotation, and in MAE's case even internet-scale pre-training. Empirically, the models generalize to object categories and styles unseen during fine-tuning, closely approaching the heavily supervised SAM while outperforming it and existing promptable/discriminative models on fine structures and ambiguous boundaries. This provides empirical evidence that generative representations can support highly generalizable, category-agnostic instance segmentation.
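The paper itself does not spell out the instance coloring loss in this summary, so the details below are an assumption. One plausible reading is a discriminative-style objective: the model predicts a per-pixel "color" (embedding), the loss pulls each pixel toward the mean color of its instance, and pushes the mean colors of distinct instances at least a margin apart. The function name, the `margin` parameter, and the exact pull/push formulation here are all hypothetical, a minimal sketch rather than the authors' actual loss:

```python
import numpy as np

def instance_coloring_loss(pred, inst_ids, margin=1.0):
    """Hypothetical sketch of an instance coloring loss (not the paper's exact form).

    pred:     (H, W, C) predicted per-pixel colors/embeddings.
    inst_ids: (H, W) integer instance labels (0 = background, ignored here).

    Pull term: squared distance of each pixel's color to its instance mean.
    Push term: hinge penalty when two instance means are closer than `margin`.
    """
    ids = [i for i in np.unique(inst_ids) if i != 0]
    means = {}
    pull = 0.0
    for i in ids:
        pix = pred[inst_ids == i]                  # (N_i, C) colors of instance i
        mu = pix.mean(axis=0)
        means[i] = mu
        pull += np.mean(np.sum((pix - mu) ** 2, axis=1))
    pull /= max(len(ids), 1)

    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[ids[a]] - means[ids[b]])
            push += max(0.0, margin - d) ** 2      # penalize means that are too close
            pairs += 1
    push /= max(pairs, 1)
    return pull + push
```

With this formulation, a prediction that colors each instance uniformly with well-separated colors incurs zero loss, while assigning two instances the same color incurs the full margin penalty, which is the behavior a category-agnostic segmenter needs.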

📝 Abstract
By pretraining to synthesize coherent images from perturbed inputs, generative models inherently learn to understand object boundaries and scene compositions. How can we repurpose these generative representations for general-purpose perceptual organization? We finetune Stable Diffusion and MAE (encoder+decoder) for category-agnostic instance segmentation using our instance coloring loss exclusively on a narrow set of object types (indoor furnishings and cars). Surprisingly, our models exhibit strong zero-shot generalization, accurately segmenting objects of types and styles unseen in finetuning (and in many cases, MAE's ImageNet-1K pretraining too). Our best-performing models closely approach the heavily supervised SAM when evaluated on unseen object types and styles, and outperform it when segmenting fine structures and ambiguous boundaries. In contrast, existing promptable segmentation architectures or discriminatively pretrained models fail to generalize. This suggests that generative models learn an inherent grouping mechanism that transfers across categories and domains, even without internet-scale pretraining. Code, pretrained models, and demos are available on our website.
Problem

Research questions and friction points this paper is trying to address.

Repurpose generative models for general instance segmentation
Achieve zero-shot generalization on unseen object types
Outperform supervised models in segmenting fine structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repurposing generative models for instance segmentation
Finetuning Stable Diffusion and MAE with instance coloring loss
Achieving zero-shot generalization across unseen object types
🔎 Similar Papers
No similar papers found.