🤖 AI Summary
This work addresses the challenge of fine-scale distortions in localized regions—such as limbs, facial features, and text—commonly observed in text-to-image (T2I) generation, where existing methods often suffer from semantic drift or limited editability. To overcome these limitations, we propose a hierarchical decision-driven multi-agent retouching framework that, for the first time, integrates the human perception–reasoning–action mechanism into T2I post-processing. The framework employs a perception agent to localize distortions, a reasoning agent to align edits with user preferences, and an action agent to perform localized inpainting, enabling fine-grained self-corrective generation. We introduce the GenBlemish-27K dataset to support this approach and demonstrate significant improvements over state-of-the-art methods in perceptual quality, distortion localization accuracy, and alignment with human preferences, thereby enhancing both local realism and controllability of generated images.
📝 Abstract
Text-to-image (T2I) diffusion models such as SDXL and FLUX have achieved impressive photorealism, yet small-scale distortions remain pervasive in localized regions such as limbs, faces, and text. Existing refinement approaches either perform costly iterative re-generation or rely on vision-language models (VLMs) with weak spatial grounding, leading to semantic drift and unreliable local edits. To close this gap, we propose Agentic Retoucher, a hierarchical decision-driven framework that reformulates post-generation correction as a human-like perception–reasoning–action loop. Specifically, we design (1) a perception agent that learns contextual saliency for fine-grained distortion localization under text-image consistency cues, (2) a reasoning agent that performs human-aligned inferential diagnosis via progressive preference alignment, and (3) an action agent that adaptively plans localized inpainting guided by user preference. This design integrates perceptual evidence, linguistic reasoning, and controllable correction into a unified, self-corrective decision process. To enable fine-grained supervision and quantitative evaluation, we further construct GenBlemish-27K, a dataset of 6K T2I images with 27K annotated artifact regions across 12 categories. Extensive experiments demonstrate that Agentic Retoucher consistently outperforms state-of-the-art methods in perceptual quality, distortion localization, and human preference alignment, establishing a new paradigm for self-corrective and perceptually reliable T2I generation.
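The three-agent loop described above can be sketched as a simple pipeline. This is a minimal, illustrative mock-up: the agent names, data structures, and threshold logic are assumptions for exposition, not the paper's actual API or models.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) of a suspected artifact
    category: str                    # e.g. "hand", "face", "text"
    score: float                     # perception confidence

def perceive(image, prompt: str) -> List[Region]:
    """Perception agent (stub): localize fine-grained distortions
    using text-image consistency cues. A real agent would run a
    grounding model; here we return a fixed example detection."""
    return [Region((120, 200, 40, 40), "hand", 0.91)]

def reason(regions: List[Region], preference: str) -> List[Region]:
    """Reasoning agent (stub): decide which detections the user would
    actually want fixed. The preference-dependent threshold is a
    hypothetical stand-in for progressive preference alignment."""
    threshold = 0.5 if preference == "aggressive" else 0.8
    return [r for r in regions if r.score >= threshold]

def act(image, regions: List[Region]):
    """Action agent (stub): plan localized inpainting over the
    selected regions; a real agent would invoke an inpainting model."""
    return [("inpaint", r.bbox, r.category) for r in regions]

def retouch(image, prompt: str, preference: str = "conservative"):
    """One pass of the self-corrective perception-reasoning-action loop."""
    regions = perceive(image, prompt)
    selected = reason(regions, preference)
    return act(image, selected)
```

The point of the decomposition is that each stage is independently replaceable: perception can be swapped for a stronger grounding model, and the reasoning stage mediates between raw detections and user intent before any pixels are touched.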