Yuan: Yielding Unblemished Aesthetics Through A Unified Network for Visual Imperfections Removal in Generated Images

📅 2025-01-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Generative AI images often exhibit visual defects—including anatomical distortions, object misalignment, and text misplacement—that hinder real-world deployment. To address this, we propose an end-to-end automatic repair framework. Our method introduces a novel dual-conditioned mask generation mechanism that jointly leverages textual prompts and image segmentation for precise, annotation-free defect localization. We further design a context-aware fine-grained redrawing module to preserve both semantic consistency and visual fidelity. Technically, the framework integrates a diffusion-based conditional mask prediction network with an adaptive inpainting module featuring multi-scale feature alignment. Additionally, we establish a joint evaluation metric combining NIQE, BRISQUE, and PI. Extensive experiments on ImageNet100, Stanford Dogs, and a custom dataset demonstrate an average 12.7% improvement in quantitative metrics and a 91.3% human evaluation pass rate—substantially outperforming state-of-the-art approaches.

Technology Category

Application Category

📝 Abstract
Generative AI presents transformative potential across various domains, from creative arts to scientific visualization. However, the utility of AI-generated imagery is often compromised by visual flaws, including anatomical inaccuracies, improper object placements, and misplaced textual elements. These imperfections pose significant challenges for practical applications. To overcome these limitations, we introduce extit{Yuan}, a novel framework that autonomously corrects visual imperfections in text-to-image synthesis. extit{Yuan} uniquely conditions on both the textual prompt and the segmented image, generating precise masks that identify areas in need of refinement without requiring manual intervention -- a common constraint in previous methodologies. Following the automated masking process, an advanced inpainting module seamlessly integrates contextually coherent content into the identified regions, preserving the integrity and fidelity of the original image and associated text prompts. Through extensive experimentation on publicly available datasets such as ImageNet100 and Stanford Dogs, along with a custom-generated dataset, extit{Yuan} demonstrated superior performance in eliminating visual imperfections. Our approach consistently achieved higher scores in quantitative metrics, including NIQE, BRISQUE, and PI, alongside favorable qualitative evaluations. These results underscore extit{Yuan}'s potential to significantly enhance the quality and applicability of AI-generated images across diverse fields.
Problem

Research questions and friction points this paper is trying to address.

Generative AI
Visual Errors
Image Synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Yuan Method
AI Image Correction
Automatic Error Detection
🔎 Similar Papers
No similar papers found.
Z
Zhenyu Yu
Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
Chee Seng Chan
Chee Seng Chan
Universiti Malaya, Malaysia
Computer VisionMachine LearningImage Processing