Crucial-Diff: A Unified Diffusion Model for Crucial Image and Annotation Synthesis in Data-scarce Scenarios

📅 2025-07-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
In data-scarce domains such as medical imaging and industrial defect detection, existing synthetic data generation methods suffer from sample redundancy, oversimplification, poor discriminability, and limited cross-object generalization. To address these challenges, this paper proposes Crucial-Diff, a unified diffusion-based framework. Its core innovation is the Weakness Aware Sample Miner (WASM), which, combined with a Scene Agnostic Feature Extractor (SAFE), enables adversarial generation of crucial samples guided by downstream model feedback, jointly synthesizing diverse, discriminative, object-agnostic images with corresponding pixel-level annotations. Crucial-Diff mitigates overfitting and class imbalance and significantly boosts downstream detection and segmentation performance, achieving 83.63% pixel-level AP and 78.12% F1-MAX on MVTec, and 81.64% mIoU and 87.69% mDice on a colon polyp dataset, outperforming prior synthetic-data approaches.

📝 Abstract
The scarcity of data in various scenarios, such as medical imaging, industry, and autonomous driving, leads to model overfitting and dataset imbalance, thus hindering effective detection and segmentation performance. Existing studies employ generative models to synthesize more training samples to mitigate data scarcity. However, these synthetic samples are repetitive or simplistic and fail to provide the "crucial information" that targets the downstream model's weaknesses. Additionally, these methods typically require separate training for different objects, leading to computational inefficiencies. To address these issues, we propose Crucial-Diff, a domain-agnostic framework designed to synthesize crucial samples. Our method integrates two key modules. The Scene Agnostic Feature Extractor (SAFE) utilizes a unified feature extractor to capture target information. The Weakness Aware Sample Miner (WASM) generates hard-to-detect samples using feedback from the detection results of the downstream model; this feedback is then fused with the output of the SAFE module. Together, these modules let Crucial-Diff generate diverse, high-quality training data, achieving a pixel-level AP of 83.63% and an F1-MAX of 78.12% on MVTec. On the polyp dataset, Crucial-Diff reaches an mIoU of 81.64% and an mDice of 87.69%. Code will be released after acceptance.
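The abstract describes a feedback loop in which WASM steers generation toward samples the downstream detector handles poorly. Below is a minimal sketch of what such weakness-guided sampling could look like; the `ToyDenoiser` and `ToyDownstream` classes and the single-step update rule are illustrative stand-ins, not the paper's actual architecture or released code.

```python
# Hedged sketch of weakness-aware guided sampling (illustrative names only).
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for the diffusion backbone: predicts noise from (x_t, t)."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x_t: torch.Tensor, t: int) -> torch.Tensor:
        return self.net(x_t)  # timestep conditioning omitted for brevity

class ToyDownstream(nn.Module):
    """Stand-in for the downstream segmenter whose weaknesses guide WASM."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(x))  # per-pixel detection confidence

def weakness_guided_sample(denoiser, downstream, steps=50, guidance=0.1,
                           shape=(1, 3, 64, 64)):
    """At each denoising step, nudge the sample down the gradient of the
    downstream model's confidence, so generation drifts toward samples the
    detector finds hard. One plausible reading of the WASM feedback loop,
    not the paper's exact update rule."""
    x = torch.randn(shape)
    for t in reversed(range(steps)):
        x = x.detach().requires_grad_(True)
        eps = denoiser(x, t)
        conf = downstream(x).mean()             # high confidence = easy sample
        grad = torch.autograd.grad(conf, x)[0]  # direction of easier samples
        # A full DDPM update is collapsed into one crude step here.
        x = (x - eps / steps - guidance * grad).detach()
    return x

hard_sample = weakness_guided_sample(ToyDenoiser(), ToyDownstream())
print(hard_sample.shape)  # torch.Size([1, 3, 64, 64])
```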
Problem

Research questions and friction points this paper is trying to address.

Addressing data scarcity in medical, industrial, and autonomous driving scenarios
Overcoming repetitive and simplistic synthetic training samples
Eliminating separate training for different objects to improve efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified diffusion model synthesizes crucial samples with pixel-level annotations (see the data-mixing sketch after this list)
Scene Agnostic Feature Extractor (SAFE) captures target information with a single unified extractor
Weakness Aware Sample Miner (WASM) generates hard-to-detect samples from downstream model feedback
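Since the synthesized image-annotation pairs are meant to be folded back into downstream training, here is a minimal sketch of that consumption side; the `SyntheticPairs` wrapper is hypothetical and the random tensors stand in for real and generated data, so this reflects the general recipe rather than the paper's pipeline.

```python
# Hedged sketch: mixing synthetic image/mask pairs into a scarce training set.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class SyntheticPairs(Dataset):
    """Illustrative wrapper for Crucial-Diff-style outputs: images paired
    with pixel-level masks. The paper's actual data format may differ."""
    def __init__(self, images: torch.Tensor, masks: torch.Tensor):
        assert len(images) == len(masks)
        self.images, self.masks = images, masks

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, i):
        return self.images[i], self.masks[i]

# Random tensors stand in for real and synthesized data; mixing the two is
# how hard synthetic samples would rebalance a scarce training set.
real = SyntheticPairs(torch.randn(100, 3, 64, 64), torch.zeros(100, 1, 64, 64))
synthetic = SyntheticPairs(torch.randn(40, 3, 64, 64), torch.ones(40, 1, 64, 64))
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=8, shuffle=True)

images, masks = next(iter(loader))
print(images.shape, masks.shape)  # [8, 3, 64, 64] and [8, 1, 64, 64]
```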
Authors

Siyue Yao
School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
Mingjie Sun
Thinking Machines Lab
Eng Gee Lim
School of Advanced Technology, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
Ran Yi
Associate Professor, Shanghai Jiao Tong University
Computer Vision, Computer Graphics
Baojiang Zhong
School of Computer Science and Technology, Soochow University, Suzhou 215006, China
Moncef Gabbouj
Professor, Tampere University
Machine Learning, Artificial Intelligence, Signal Processing, Image Processing, Video Communication