🤖 AI Summary
This work addresses the reliance of open-world semantic segmentation on costly fine-grained pixel annotations by proposing a diffusion model–based framework for automatic data generation. Starting from category labels, the method simultaneously synthesizes image–text pairs along with counterfactual negative samples, and leverages open-set detection combined with interactive segmentation to extract pseudo-mask labels for contrastive pretraining. To the best of our knowledge, this is the first approach to integrate counterfactual negative sample generation and self-supervised pseudo-mask extraction, enabling the construction of high-quality pretraining data without any manual annotation. The proposed method achieves state-of-the-art performance in open-world semantic segmentation, attaining mIoU scores of 62.9%, 26.7%, and 40.2% on PASCAL VOC, PASCAL Context, and COCO, respectively.
📝 Abstract
Open-world semantic segmentation currently relies heavily on large-scale image-text pair datasets, which typically lack fine-grained pixel annotations covering a sufficient range of categories. Acquiring such data is economically prohibitive because of the substantial human labor and time required. Motivated by the strong image generation capabilities of diffusion models, we introduce a novel diffusion model-driven pipeline, named "MagicSeg", that automatically generates datasets tailored to open-world semantic segmentation. MagicSeg starts from class labels and generates high-fidelity textual descriptions, which in turn guide the diffusion model to generate images. Rather than generating only positive samples for each label, our pipeline simultaneously generates corresponding negative images that serve as paired counterfactual samples for contrastive training. Then, to provide a self-supervised signal for open-world segmentation pretraining, MagicSeg integrates an open-vocabulary detection model with an interactive segmentation model to extract object masks from the generated images as precise segmentation labels for the provided category labels. By applying our dataset to a contrastive language-image pretraining model with pseudo-mask supervision and auxiliary counterfactual contrastive training, the downstream model achieves strong performance on open-world semantic segmentation. We evaluate our model on PASCAL VOC, PASCAL Context, and COCO, achieving state-of-the-art mIoU scores of 62.9%, 26.7%, and 40.2%, respectively, demonstrating our dataset's effectiveness in enhancing open-world semantic segmentation capabilities. Project website: https://github.com/ckxhp/magicseg.
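The data-generation pipeline described above (label → caption → diffusion image plus counterfactual negative → pseudo mask via detection and interactive segmentation) can be sketched as follows. This is a minimal structural sketch, not the paper's implementation: every function below (`generate_caption`, `diffuse`, `extract_mask`) is a hypothetical stand-in for the real components (an LLM captioner, a text-to-image diffusion model, and an open-vocabulary detector combined with an interactive segmenter), and the string outputs are placeholders for actual tensors.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    label: str            # source class label
    caption: str          # diffusion prompt derived from the label
    image: str            # placeholder for the generated positive image
    negative_image: str   # counterfactual negative for contrastive training
    pseudo_mask: str      # mask extracted by detection + interactive segmentation

def generate_caption(label: str) -> str:
    # Stand-in for an LLM that expands a class label into a rich prompt.
    return f"a photo of a {label} in a natural scene"

def diffuse(prompt: str) -> str:
    # Stand-in for a text-to-image diffusion model.
    return f"<image generated from: {prompt}>"

def extract_mask(image: str, label: str) -> str:
    # Stand-in for open-vocabulary detection (label -> box) followed by
    # box-prompted interactive segmentation (box -> mask).
    return f"<mask of {label} in {image}>"

def build_dataset(labels: list[str]) -> list[Sample]:
    dataset = []
    for label in labels:
        caption = generate_caption(label)
        positive = diffuse(caption)
        # Counterfactual negative: a paired prompt with the target class absent
        # (one plausible way to realize "negative images"; the paper's exact
        # prompt construction may differ).
        negative = diffuse(f"a natural scene without any {label}")
        mask = extract_mask(positive, label)
        dataset.append(Sample(label, caption, positive, negative, mask))
    return dataset
```

Each `Sample` bundles the supervision signals the abstract mentions: an image-text pair, a counterfactual negative for contrastive training, and a pseudo mask for pixel-level pretraining, all produced without manual annotation.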