GenAgent: Scaling Text-to-Image Generation via Agentic Multimodal Reasoning

📅 2026-01-26
📈 Citations: 1
Influential: 1
🤖 AI Summary
This work addresses the high training costs and the trade-off between visual understanding and image generation in unified multimodal models, as well as the lack of dynamic interaction and autonomous reasoning in existing modular systems. To overcome these limitations, the authors propose an agent-based framework that treats image generation models as callable tools, decoupling comprehension from generation and enabling a multi-turn, autonomous multimodal chain of thought spanning reasoning, tool invocation, judgment, and reflection. The architecture supports cross-tool generalization, test-time scaling, and task-adaptive inference, transcending the constraints of static pipelines. Training proceeds in two stages: supervised fine-tuning, followed by reinforcement learning that jointly optimizes image quality and reflection accuracy, with trajectory resampling for broader exploration. The method achieves a 23.6% improvement over the FLUX.1-dev baseline on GenEval++ and a 14% gain on WISE, demonstrating its effectiveness and generalization capability.
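The multi-turn loop described in the summary (reason → invoke the generator tool → judge → reflect) can be sketched as follows. All function names and the stub logic are illustrative placeholders assumed for this sketch, not the paper's actual interfaces:

```python
# Hypothetical sketch of GenAgent's agentic loop. The stubs below stand in
# for the multimodal model's reasoning, a callable generator (e.g. FLUX.1-dev),
# and the agent's own judgment/reflection steps.

def reason(prompt, history):
    # Placeholder: a real agent would emit a multimodal chain of thought.
    return {"tool_prompt": prompt}

def generate_image(tool_prompt):
    # Placeholder for the image generation model invoked as a tool.
    return f"<image for: {tool_prompt}>"

def judge(prompt, image):
    # Placeholder: a real judge would check the image against the intent.
    return {"ok": "revised" in prompt, "feedback": "add missing object"}

def reflect(prompt, verdict):
    # Placeholder: fold the judge's feedback back into the next request.
    return f"{prompt} (revised: {verdict['feedback']})"

def run_agent(prompt, max_rounds=4):
    """Iteratively reason, call the generator, judge, and reflect."""
    history = []
    image = None
    for _ in range(max_rounds):
        plan = reason(prompt, history)               # reasoning step
        image = generate_image(plan["tool_prompt"])  # tool invocation
        verdict = judge(prompt, image)               # judgment
        history.append((plan, verdict))
        if verdict["ok"]:                            # stop once satisfied
            break
        prompt = reflect(prompt, verdict)            # reflection -> next round
    return image, history

image, history = run_agent("a red cube left of a blue sphere")
```

With these stubs the loop terminates after the first reflection; the point of the structure is that refinement rounds are decided by the agent itself rather than by a fixed pipeline.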

📝 Abstract
We introduce GenAgent, unifying visual understanding and generation through an agentic multimodal model. Unlike unified models that face expensive training costs and understanding-generation trade-offs, GenAgent decouples these capabilities through an agentic framework: understanding is handled by the multimodal model itself, while generation is achieved by treating image generation models as invokable tools. Crucially, unlike existing modular systems constrained by static pipelines, this design enables autonomous multi-turn interactions where the agent generates multimodal chains-of-thought encompassing reasoning, tool invocation, judgment, and reflection to iteratively refine outputs. We employ a two-stage training strategy: first, cold-start with supervised fine-tuning on high-quality tool invocation and reflection data to bootstrap agent behaviors; second, end-to-end agentic reinforcement learning combining pointwise rewards (final image quality) and pairwise rewards (reflection accuracy), with trajectory resampling for enhanced multi-turn exploration. GenAgent significantly boosts base generator (FLUX.1-dev) performance on GenEval++ (+23.6%) and WISE (+14%). Beyond performance gains, our framework demonstrates three key properties: 1) cross-tool generalization to generators with varying capabilities, 2) test-time scaling with consistent improvements across interaction rounds, and 3) task-adaptive reasoning that automatically adjusts to different tasks. Our code will be available at https://github.com/deep-kaixun/GenAgent.
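The abstract's RL stage combines a pointwise reward on the final image with a pairwise reward on reflection accuracy. One way such signals could be folded into a single trajectory reward is sketched below; the weights and scoring functions are assumptions for illustration, not the paper's formulation:

```python
# Hypothetical combination of the two reward signals from the abstract.
# w_point / w_pair and the scoring conventions are illustrative assumptions.

def trajectory_reward(image_quality, reflection_pairs, w_point=0.7, w_pair=0.3):
    """image_quality: pointwise score of the final image in [0, 1].
    reflection_pairs: (agent_judgment, ground_truth) pairs, scored
    pairwise as the fraction of judgments that match the ground truth."""
    if reflection_pairs:
        pair_acc = sum(a == b for a, b in reflection_pairs) / len(reflection_pairs)
    else:
        pair_acc = 0.0
    return w_point * image_quality + w_pair * pair_acc

# e.g. a good final image but only one of two correct reflections:
r = trajectory_reward(0.8, [(True, True), (False, True)])
```

Rewarding reflection accuracy alongside image quality discourages the agent from judging its own outputs carelessly just to end the episode early.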
Problem

Research questions and friction points this paper is trying to address.

text-to-image generation
multimodal reasoning
agentic framework
visual understanding
image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic multimodal reasoning
tool-augmented generation
multi-turn reflective refinement
reinforcement learning with trajectory resampling
cross-tool generalization
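Among the contributions above, trajectory resampling is the least standard; a minimal sketch of one plausible reading (restarting rollouts from intermediate states of earlier trajectories so RL explores more multi-turn continuations) is shown below. The sampling scheme is an assumption, not the paper's exact algorithm:

```python
import random

# Hypothetical trajectory resampling for multi-turn exploration: pick a past
# rollout, cut it at a random round, and continue from that intermediate state.

def resample_trajectories(trajectories, n_new, rng=random.Random(0)):
    """trajectories: list of rollouts, each a list of per-round states.
    Returns n_new (prefix, restart_state) pairs to continue training from."""
    restarts = []
    for _ in range(n_new):
        traj = rng.choice(trajectories)
        cut = rng.randrange(1, len(traj))      # keep a non-empty prefix
        restarts.append((traj[:cut], traj[cut - 1]))
    return restarts

restarts = resample_trajectories([["s0", "s1", "s2"], ["t0", "t1"]], 5)
```

Restarting mid-trajectory exposes the policy to states it would rarely revisit from scratch, which is the stated purpose of enhanced multi-turn exploration.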