🤖 AI Summary
Text-to-image models frequently produce anatomical artifacts in human portraits—such as distorted, missing, or redundant limbs—severely compromising structural consistency and visual fidelity. To address this, we introduce HAD, the first large-scale Human Artifact Detection dataset (37,000+ images), and propose HADM, a cross-model generalizable artifact detector. We formally define and precisely localize structural anomalies in generated human imagery, establish a detection-driven feedback fine-tuning paradigm for diffusion models, and design a plug-and-play iterative inpainting-based repair framework. Our method integrates YOLO/DETR-based detection, multi-source domain-generalized training, RLHF-inspired feedback fine-tuning, and detection-guided inpainting. Experiments show HADM achieves 82.6% average localization accuracy on unseen generator outputs; fine-tuned models reduce human artifact rates by 39.4%; and the repair framework significantly improves anatomical plausibility and image quality.
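The detection-driven feedback fine-tuning idea above can be sketched as a penalty term added to the usual diffusion training objective. This is a minimal illustration, not the paper's released implementation: the names `feedback_loss` and `detector_score` are hypothetical stand-ins, and the exact way HADM predictions enter the objective may differ.

```python
def feedback_loss(diffusion_loss, decoded_image, detector_score, weight=0.1):
    """Combine the standard diffusion loss with a detector-based penalty.

    detector_score(image) is assumed to return a scalar summarizing the
    artifact detector's confidence that the decoded sample contains human
    artifacts (higher = worse), so confident detections raise the loss.
    """
    return diffusion_loss + weight * detector_score(decoded_image)
```

For the penalty to steer the generator, the decoded sample must either remain differentiable with respect to the model's parameters or be used as an RLHF-style reward signal rather than a direct loss term.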
📝 Abstract
Despite recent advancements, text-to-image generation models often produce images containing artifacts, especially in human figures. These artifacts appear as poorly generated human bodies, including distorted, missing, or extra body parts, leading to visual inconsistencies with typical human anatomy and greatly impairing overall fidelity. In this study, we address this challenge by curating the Human Artifact Dataset (HAD), a diverse dataset specifically designed for localizing human artifacts. HAD comprises over 37,000 images generated by several popular text-to-image models, annotated for human artifact localization. Using this dataset, we train Human Artifact Detection Models (HADM), which can identify different artifact types across multiple generative domains and demonstrate strong generalization, even on images from unseen generators. Additionally, to further improve generators' awareness of human structural coherence, we use the predictions from our HADM as feedback for diffusion model fine-tuning. Our experiments confirm a reduction in human artifacts in the resulting model. Furthermore, we showcase a novel application of our HADM in an iterative inpainting framework to correct human artifacts in arbitrary images directly, demonstrating its utility in improving image quality. Our dataset and detection models are available at: https://github.com/wangkaihong/HADM.
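The iterative inpainting framework can be sketched as a simple detect-then-repair loop. The sketch below is an assumption about the control flow only: `detect` and `inpaint` are hypothetical callables standing in for a trained artifact detector (such as HADM) and a diffusion inpainting model, and the termination policy is illustrative rather than taken from the paper.

```python
def iterative_repair(image, detect, inpaint, conf_thresh=0.5, max_iters=5):
    """Repeatedly detect and inpaint human artifacts until none remain
    above the confidence threshold or the iteration budget is spent.

    detect(image)      -> iterable of (box, confidence) detections
    inpaint(image, box) -> image with the given region regenerated
    """
    for _ in range(max_iters):
        detections = [(b, c) for b, c in detect(image) if c >= conf_thresh]
        if not detections:
            break  # no confident artifacts left to repair
        # Repair the most confident artifact first, then re-detect,
        # since inpainting one region can change the rest of the image.
        box, _ = max(detections, key=lambda d: d[1])
        image = inpaint(image, box)
    return image
```

Re-running detection after every repair is the key design point: it lets the loop catch artifacts that the inpainting step itself may introduce.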