🤖 AI Summary
This work addresses the limitations of existing general-purpose models, which struggle to balance design constraints with creative workflows in visual generation, and of current workflow-based agents, which lack the capabilities needed for autonomous creative planning. To overcome these challenges, we propose VisionCreator, an end-to-end learnable visual-generation agent that unifies Understanding, Thinking, Planning, and Creation (UTPC) to autonomously produce complex visual content. Key contributions include the VisGenData-4k dataset, a metacognition-based mechanism for generating creative trajectories, and two training techniques, Progressive Specialization Training (PST) and Virtual Reinforcement Learning (VRL), applied within a high-fidelity simulated environment. The resulting VisionCreator-8B/32B models outperform larger closed-source counterparts on the VisGenBench benchmark, demonstrating superior performance on multi-step visual creation tasks.
📝 Abstract
Visual content creation demands a nuanced understanding of design conventions and creative workflows, capabilities that challenge general-purpose models, while workflow-based agents lack the specialized knowledge needed for autonomous creative planning. To overcome these challenges, we propose VisionCreator, a native visual-generation agentic model that unifies Understanding, Thinking, Planning, and Creation (UTPC) capabilities within an end-to-end learnable framework. Our work makes four key contributions: (i) VisGenData-4k and its construction methodology, which uses a metacognition-based VisionAgent to generate high-quality creation trajectories with explicit UTPC structure; (ii) the VisionCreator agentic model, optimized through Progressive Specialization Training (PST) and Virtual Reinforcement Learning (VRL) within a high-fidelity simulated environment, enabling stable and efficient acquisition of UTPC capabilities for complex creation tasks; (iii) VisGenBench, a comprehensive benchmark of 1.2k test samples spanning diverse scenarios for standardized evaluation of multi-step visual creation; and (iv) VisionCreator-8B/32B models that, remarkably, outperform larger closed-source models across multiple evaluation dimensions. Overall, this work provides a foundation for future research on visual-generation agentic systems.