SynthSeg-Agents: Multi-Agent Synthetic Data Generation for Zero-Shot Weakly Supervised Semantic Segmentation

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot weakly supervised semantic segmentation (ZSWSSS) suffers from the absence of ground-truth annotated images, hindering model training. Method: This paper introduces the first fully synthetic-data-driven paradigm for ZSWSSS, leveraging a multi-agent framework powered by large language models (LLMs) and vision-language models (VLMs). Two collaborative agents, a prompt optimizer and an image generator, perform self-iterative refinement to produce end-to-end synthetic training data. The method integrates CLIP-based similarity evaluation and diversity filtering, ViT-based pseudo-labeling, and memory-augmented prompt-space exploration, entirely without real training images. Contribution/Results: Evaluated on PASCAL VOC 2012 and COCO 2014, segmentation models trained solely on the synthetic data achieve performance comparable to fully supervised baselines. This work establishes, for the first time, the feasibility and effectiveness of zero-shot semantic segmentation training using exclusively synthetic data.

📝 Abstract
Weakly Supervised Semantic Segmentation (WSSS) with image-level labels aims to produce pixel-level predictions without requiring dense annotations. While recent approaches have leveraged generative models to augment existing data, they remain dependent on real-world training samples. In this paper, we introduce a novel direction, Zero-Shot Weakly Supervised Semantic Segmentation (ZSWSSS), and propose SynthSeg-Agents, a multi-agent framework driven by Large Language Models (LLMs) to generate synthetic training data entirely without real images. SynthSeg-Agents comprises two key modules: a Self-Refine Prompt Agent and an Image Generation Agent. The Self-Refine Prompt Agent autonomously crafts diverse and semantically rich image prompts via iterative refinement, memory mechanisms, and prompt-space exploration, guided by CLIP-based similarity and nearest-neighbor diversity filtering. These prompts are then passed to the Image Generation Agent, which leverages Vision-Language Models (VLMs) to synthesize candidate images. A frozen CLIP scoring model is employed to select high-quality samples, and a ViT-based classifier is further trained to relabel the entire synthetic dataset with improved semantic precision. Our framework produces high-quality training data without any real-image supervision. Experiments on PASCAL VOC 2012 and COCO 2014 show that SynthSeg-Agents achieves competitive performance without using real training images. This highlights the potential of LLM-driven agents in enabling cost-efficient and scalable semantic segmentation.
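The CLIP-based similarity and nearest-neighbor diversity filtering described in the abstract can be sketched as a simple two-stage selection over embedding vectors. This is an illustrative reconstruction, not the paper's code: the function name `filter_candidates` and the threshold values are assumptions, and in the actual framework the embeddings would come from a frozen CLIP image/text encoder rather than being passed in directly.

```python
import numpy as np

def filter_candidates(image_embs, text_emb, sim_thresh=0.25, div_thresh=0.15):
    """Keep the indices of images that (a) match the class prompt under
    cosine similarity and (b) are not near-duplicates of any already-kept
    image (nearest-neighbor diversity check). Thresholds are illustrative."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    kept_embs, kept_idx = [], []
    for i, emb in enumerate(image_embs):
        # stage 1: CLIP similarity to the target class prompt
        if cos(emb, text_emb) < sim_thresh:
            continue
        # stage 2: reject candidates too close to the kept set
        if any(cos(emb, k) > 1.0 - div_thresh for k in kept_embs):
            continue
        kept_embs.append(emb)
        kept_idx.append(i)
    return kept_idx
```

Greedy nearest-neighbor filtering like this is order-dependent, so in practice candidates would typically be visited in descending CLIP-score order so the strongest matches are kept first.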
Problem

Research questions and friction points this paper is trying to address.

Generates synthetic training data without real images for segmentation
Enables zero-shot weakly supervised semantic segmentation via multi-agent LLMs
Improves semantic precision in segmentation using autonomous prompt refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent LLM framework generates synthetic training data
Self-refining prompts create diverse images without real samples
CLIP scoring and ViT classifier enhance semantic precision
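The self-refining, memory-augmented prompt exploration named above can be illustrated with a toy search loop. Everything here is a hedged stand-in: `mutate` substitutes for the LLM rewrite step, `score` for the CLIP-based evaluator, and `MODIFIERS` is an invented list; none of these names come from the paper.

```python
import random

# Illustrative prompt modifiers a rewriting LLM might add (assumption).
MODIFIERS = ["photorealistic", "close-up", "outdoor scene", "cluttered background"]

def mutate(prompt, rng):
    # Stand-in for an LLM that rewrites the prompt with a new modifier.
    return prompt + ", " + rng.choice(MODIFIERS)

def explore_prompts(base_prompt, score, rounds=10, seed=0):
    """Memory-augmented exploration: keep every (score, prompt) pair seen,
    and in each round refine the best prompt found so far."""
    rng = random.Random(seed)
    memory = [(score(base_prompt), base_prompt)]
    for _ in range(rounds):
        _, best = max(memory)          # recall the best prompt from memory
        candidate = mutate(best, rng)  # refine it
        memory.append((score(candidate), candidate))
    return max(memory)[1]
```

The memory keeps all explored prompts rather than only the current one, so the loop can back off from a bad mutation instead of compounding it, which is the practical point of the memory mechanism.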