Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting

📅 2024-04-29
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing foreground-conditioned image generation methods suffer from poor object integrity, foreground-background inconsistency, limited diversity, and inflexible control—stemming from end-to-end inpainting models’ sensitivity to mask noise, weak foreground semantic understanding, data distribution bias, and interference from text-image prompts. This paper proposes the first closed-loop, multi-agent collaborative framework integrating vision-language models (VLMs), large language models (LLMs), and diffusion models, operating in stages: semantic parsing, edge-guided foreground-aware generation, dynamic prompt regeneration, and quality-feedback-driven iterative refinement. Innovations include a Canny-edge-guided controllable synthesis module and an image fusion-based refinement module, coupled with a multidimensional evaluation-driven optimization mechanism. Experiments demonstrate significant suppression of over-imagining, improved foreground-background consistency and aesthetic quality, a 32% diversity gain over state-of-the-art methods, and superior reliability—the highest reported to date.

Technology Category

Application Category

📝 Abstract
Recent advancements in image inpainting, particularly through diffusion modeling, have yielded promising outcomes. However, when tested in scenarios involving the completion of images based on the foreground objects, current methods that aim to inpaint an image in an end-to-end manner encounter challenges such as"over-imagination", inconsistency between foreground and background, and limited diversity. In response, we introduce Anywhere, a pioneering multi-agent framework designed to address these issues. Anywhere utilizes a sophisticated pipeline framework comprising various agents such as Visual Language Model (VLM), Large Language Model (LLM), and image generation models. This framework consists of three principal components: the prompt generation module, the image generation module, and the outcome analyzer. The prompt generation module conducts a semantic analysis of the input foreground image, leveraging VLM to predict relevant language descriptions and LLM to recommend optimal language prompts. In the image generation module, we employ a text-guided canny-to-image generation model to create a template image based on the edge map of the foreground image and language prompts, and an image refiner to produce the outcome by blending the input foreground and the template image. The outcome analyzer employs VLM to evaluate image content rationality, aesthetic score, and foreground-background relevance, triggering prompt and image regeneration as needed. Extensive experiments demonstrate that our Anywhere framework excels in foreground-conditioned image inpainting, mitigating"over-imagination", resolving foreground-background discrepancies, and enhancing diversity. It successfully elevates foreground-conditioned image inpainting to produce more reliable and diverse results.
Problem

Research questions and friction points this paper is trying to address.

Overcoming foreground-conditioned image generation challenges
Enhancing image diversity and control flexibility
Improving fidelity and quality with multi-agent framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework design
User-guided textual inputs
Automated quality assessment
🔎 Similar Papers
No similar papers found.
T
Tianyidan Xie
State Key Laboratory of Novel Software Technology, Nanjing University
R
Rui Ma
Jilin University
Q
Qian Wang
China Mobile Communications Group Co., Ltd
X
Xiaoqian Ye
China Mobile Communications Group Co., Ltd
F
Feixuan Liu
Larkagent AI
Y
Ying Tai
State Key Laboratory of Novel Software Technology, Nanjing University
Z
Zhenyu Zhang
State Key Laboratory of Novel Software Technology, Nanjing University
Z
Zili Yi
State Key Laboratory of Novel Software Technology, Nanjing University