VisionDirector: Vision-Language Guided Closed-Loop Refinement for Generative Image Synthesis

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
In professional design scenarios, long, multi-objective vision-language prompts (e.g., jointly optimizing layout, typography, logo fidelity, and localized object placement) exceed the capabilities of current generative models, which satisfy at most 72% of such tightly coupled objectives. To address this, we introduce Long Goal Bench, a benchmark comprising 2,000 complex design tasks, and propose VisionDirector, a training-free vision-language supervision framework. VisionDirector combines structured goal parsing, fine-grained grid sampling with semantic verification and rollback, dynamic choice between single-step and stepwise editing, and Group Relative Policy Optimization (GRPO) for closed-loop editing control. Evaluated on GenEval and ImgEdit, VisionDirector sets a new state of the art (+7% overall on GenEval, +0.07 absolute on ImgEdit), with marked improvements in font rendering accuracy, multi-object scene consistency, and pose editing quality.


📝 Abstract
Generative models can now produce photorealistic imagery, yet they still struggle with the long, multi-goal prompts that professional designers issue. To expose this gap and better evaluate models' performance in real-world settings, we introduce Long Goal Bench (LGBench), a 2,000-task suite (1,000 T2I and 1,000 I2I) whose average instruction contains 18 to 22 tightly coupled goals spanning global layout, local object placement, typography, and logo fidelity. We find that even state-of-the-art models satisfy fewer than 72 percent of the goals and routinely miss localized edits, confirming the brittleness of current pipelines. To address this, we present VisionDirector, a training-free vision-language supervisor that (i) extracts structured goals from long instructions, (ii) dynamically decides between one-shot generation and staged edits, (iii) runs micro-grid sampling with semantic verification and rollback after every edit, and (iv) logs goal-level rewards. We further fine-tune the planner with Group Relative Policy Optimization, yielding shorter edit trajectories (3.1 versus 4.2 steps) and stronger alignment. VisionDirector achieves new state of the art on GenEval (plus 7 percent overall) and ImgEdit (plus 0.07 absolute) while producing consistent qualitative improvements on typography, multi-object scenes, and pose editing.
Problem

Research questions and friction points this paper is trying to address.

Addresses generative models' struggle with long, multi-goal prompts
Introduces a benchmark to evaluate models on complex real-world tasks
Proposes a training-free supervisor for goal-aligned image refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language supervisor extracts structured goals from long instructions
Dynamic decision between one-shot generation and staged edits
Micro-grid sampling with semantic verification and rollback after edits
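The generate-verify-rollback loop described in these bullets can be sketched as below. This is a minimal illustration, not the paper's implementation: `generate`, `edit`, and `verify` are hypothetical stand-ins for the (unreleased) generator, editor, and vision-language verifier, and the acceptance rule simply keeps an edit only if it increases the number of satisfied goals.

```python
def refine(goals, generate, edit, verify, max_steps=5):
    """Closed-loop refinement sketch: after each edit, re-verify every goal
    and roll back to the previous image if the edit did not help."""
    image = generate(goals)                      # one-shot initial generation
    best = sum(verify(g, image) for g in goals)  # goal-level reward
    for _ in range(max_steps):
        if best == len(goals):
            break                                # all goals satisfied
        missing = [g for g in goals if not verify(g, image)]
        candidate = edit(image, missing)         # staged edit on unmet goals
        score = sum(verify(g, candidate) for g in goals)
        if score > best:
            image, best = candidate, score       # accept the edit
        # else: rollback -- discard candidate, keep the current image
    return image, best
```

With toy stand-ins (an "image" as the set of goals it satisfies), the loop accepts edits until all goals are met and rejects any edit that would regress a previously satisfied goal.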
Meng Chu
The Hong Kong University of Science and Technology
Senqiao Yang
The Chinese University of Hong Kong
Haoxuan Che
The Hong Kong University of Science and Technology
Suiyun Zhang
Huawei Research
Xichen Zhang
The Hong Kong University of Science and Technology
Shaozuo Yu
The Chinese University of Hong Kong
Haokun Gui
The Hong Kong University of Science and Technology
Zhefan Rao
The Hong Kong University of Science and Technology
Dandan Tu
Huawei Research
Rui Liu
Huawei Research
Jiaya Jia
Chair Professor, HKUST; Adjunct Prof., CUHK