🤖 AI Summary
Existing unified multimodal models are limited in generating realistic images involving long-tail and knowledge-intensive concepts because they rely on static, parameterized knowledge. This work introduces an agent-based architecture for world-grounded image generation, proposing an end-to-end pipeline that tightly integrates prompt understanding, multimodal evidence retrieval, external-knowledge rephrasing, and image synthesis, coupling reasoning, retrieval, and generation within a unified framework. To support training, the authors construct a data pipeline comprising 143K high-quality agent trajectories and introduce FactIP, a new benchmark for evaluating knowledge-grounding capabilities. Experiments demonstrate that the proposed method significantly outperforms current unified models across multiple benchmarks and real-world tasks, achieving world-knowledge performance comparable to the strongest closed-source systems.
📝 Abstract
Unified multimodal models provide a natural and promising architecture for understanding diverse and complex real-world knowledge while generating high-quality images. However, they still rely primarily on frozen parametric knowledge, which makes them struggle with real-world image generation involving long-tail and knowledge-intensive concepts. Inspired by the broad success of agents on real-world tasks, we explore agentic modeling to address this limitation. Specifically, we present Unify-Agent, a unified multimodal agent for world-grounded image synthesis, which reframes image generation as an agentic pipeline consisting of prompt understanding, multimodal evidence searching, grounded recaptioning, and final synthesis. To train our model, we construct a tailored multimodal data pipeline and curate 143K high-quality agent trajectories for world-grounded image synthesis, enabling effective supervision over the full agentic generation process. We further introduce FactIP, a benchmark covering 12 categories of culturally significant and long-tail factual concepts that explicitly requires external knowledge grounding. Extensive experiments show that our proposed Unify-Agent substantially improves over its base unified model across diverse benchmarks and real-world generation tasks, while approaching the world-knowledge capabilities of the strongest closed-source models. As an early exploration of agent-based modeling for world-grounded image synthesis, our work highlights the value of tightly coupling reasoning, searching, and generation for reliable open-world agentic image synthesis.
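The four-stage agentic pipeline named in the abstract (prompt understanding → multimodal evidence searching → grounded recaptioning → final synthesis) can be sketched in miniature as below. This is a hypothetical illustration only: every function name, the `Trajectory` record, and the toy knowledge-base lookup are assumptions for exposition, not the paper's actual API, and real evidence search and image synthesis are replaced by stubs.

```python
# Illustrative sketch of a world-grounded agentic image-generation pipeline.
# All names and data structures are hypothetical; the search and synthesis
# stages are stand-in stubs, not the paper's implementation.
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    """One agent trajectory: the state passed through all four stages."""
    prompt: str
    entities: list = field(default_factory=list)
    evidence: list = field(default_factory=list)
    grounded_caption: str = ""
    image: str = ""


def understand_prompt(traj: Trajectory) -> Trajectory:
    # Stage 1: identify knowledge-intensive concepts the generator may not
    # hold in parametric memory (here: naive title-case heuristic).
    traj.entities = [w for w in traj.prompt.split() if w.istitle()]
    return traj


def search_evidence(traj: Trajectory) -> Trajectory:
    # Stage 2: retrieve external multimodal evidence for each entity.
    # A tiny in-memory dict stands in for a real retrieval backend.
    toy_kb = {"Matcha": "a finely ground green-tea powder from Japan"}
    traj.evidence = [toy_kb[e] for e in traj.entities if e in toy_kb]
    return traj


def recaption(traj: Trajectory) -> Trajectory:
    # Stage 3: rewrite the prompt grounded in the retrieved evidence.
    facts = "; ".join(traj.evidence)
    traj.grounded_caption = f"{traj.prompt} ({facts})" if facts else traj.prompt
    return traj


def synthesize(traj: Trajectory) -> Trajectory:
    # Stage 4: pass the grounded caption to the image generator
    # (a placeholder string here, in place of an actual diffusion model).
    traj.image = f"<image for: {traj.grounded_caption}>"
    return traj


def run_pipeline(prompt: str) -> Trajectory:
    traj = Trajectory(prompt=prompt)
    for stage in (understand_prompt, search_evidence, recaption, synthesize):
        traj = stage(traj)
    return traj
```

The point of the sketch is the tight coupling the abstract emphasizes: each stage reads and writes the same trajectory record, so retrieval results flow directly into the caption that conditions synthesis, and a full trajectory (of the kind the 143K-example training set supervises) is the natural unit of data.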