DefectFill: Realistic Defect Generation with Inpainting Diffusion Model for Visual Inspection

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance bottleneck of industrial visual inspection models caused by scarce defective samples, this paper proposes a defect image generation method based on fine-tuning the Stable Diffusion inpainting model. Leveraging only a small number of real defective images, it synthesizes high-fidelity, precisely localized defect samples. Key contributions include: (1) a multi-objective customized loss function integrating defect structural priors, object-level semantic constraints, and spatial attention mechanisms; and (2) an automated low-fidelity sample filtering mechanism to enhance generation quality. Evaluated on the MVTec AD dataset, the generated defect images achieve state-of-the-art fidelity and significantly improve downstream anomaly detection performance—yielding substantial gains in both AUC and PRO metrics.

📝 Abstract
Developing effective visual inspection models remains challenging due to the scarcity of defect data. While image generation models have been used to synthesize defect images, producing highly realistic defects remains difficult. We propose DefectFill, a novel method for realistic defect generation that requires only a few reference defect images. It leverages a fine-tuned inpainting diffusion model, optimized with our custom loss functions incorporating defect, object, and attention terms. It enables precise capture of detailed, localized defect features and their seamless integration into defect-free objects. Additionally, our Low-Fidelity Selection method further enhances the defect sample quality. Experiments show that DefectFill generates high-quality defect images, enabling visual inspection models to achieve state-of-the-art performance on the MVTec AD dataset.
Problem

Research questions and friction points this paper is trying to address.

Generates realistic defects for visual inspection models
Uses few reference images to create detailed defect features
Enhances defect sample quality with Low-Fidelity Selection
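The Low-Fidelity Selection step can be pictured as a simple rank-and-filter pass over generated samples. The sketch below is an assumption about how such a filter might look; `score_fn` (a per-sample fidelity score, e.g. similarity to reference defects) and `keep_ratio` are hypothetical placeholders, as the listing does not detail the paper's actual selection criterion.

```python
def filter_low_fidelity(samples, score_fn, keep_ratio=0.8):
    """Keep only the highest-scoring fraction of generated defect samples.

    samples    -- list of generated defect images (any scoreable objects)
    score_fn   -- assumed fidelity scorer: higher means more realistic
    keep_ratio -- assumed fraction of samples to retain
    """
    # Rank candidates from most to least faithful, then drop the tail.
    ranked = sorted(samples, key=score_fn, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:n_keep]
```

In practice the discarded low-fidelity samples would simply be excluded from the synthetic training set fed to the downstream inspection model.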
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned inpainting diffusion model for defect generation
Custom loss functions with defect, object, attention terms
Low-Fidelity Selection method enhances defect sample quality
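The three loss terms named above can be sketched as a weighted sum: a defect term over the masked defect region, an object term preserving the defect-free surroundings, and an attention term steering cross-attention toward the defect mask. This is a minimal illustration, not the paper's implementation; the weights `W_DEFECT`, `W_OBJECT`, `W_ATTN` and the exact form of each term are assumptions.

```python
import numpy as np

# Illustrative placeholder weights; the paper's weighting is not given here.
W_DEFECT, W_OBJECT, W_ATTN = 1.0, 0.5, 0.1

def defectfill_loss(pred_defect, target_defect,
                    pred_object, target_object,
                    attn_map, defect_mask):
    """Sketch of a three-term inpainting fine-tuning objective."""
    # Defect term: reconstruction error inside the masked defect region.
    defect_term = np.mean((pred_defect - target_defect) ** 2 * defect_mask)
    # Object term: keep the defect-free object intact outside the mask.
    object_term = np.mean((pred_object - target_object) ** 2 * (1.0 - defect_mask))
    # Attention term: pull cross-attention maps toward the defect mask.
    attn_term = np.mean((attn_map - defect_mask) ** 2)
    return W_DEFECT * defect_term + W_OBJECT * object_term + W_ATTN * attn_term
```

During fine-tuning this scalar would replace the standard denoising loss of the Stable Diffusion inpainting model.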
Jaewoo Song
Department of Electrical and Computer Engineering, Seoul National University; Global Technology Research, Samsung Electronics
Daemin Park
Department of Electrical and Computer Engineering, Seoul National University
Kanghyun Baek
IPAI, AIIS, ASRI, INMC, ISRC, Seoul National University
Sangyub Lee
IPAI, AIIS, ASRI, INMC, ISRC, Seoul National University
Jooyoung Choi
Seoul National University
Deep Generative Models
Eunji Kim
Department of Electrical and Computer Engineering, Seoul National University
Sungroh Yoon
Professor, Electrical and Computer Engineering & Artificial Intelligence, Seoul National University
AI, deep learning, machine learning, on-device AI, bioinformatics