🤖 AI Summary
Existing controllable visual-storytelling datasets struggle to provide fine-grained control over transient attributes (e.g., pose, expression, scene) while maintaining cross-frame character identity consistency.
Method: We introduce the first large-scale multimodal narrative dataset, comprising 2,000 uniquely stylized characters and 10,000 illustrated stories, enabling joint modeling of unique identities at scale with disentangled control signals (pose/expression/scene). We propose a human-in-the-loop generation pipeline that integrates expert-verified character templates, LLM-driven narrative planning, MMLM-based quality assessment, automated prompt optimization, and localized image editing, all governed by a quality-gating feedback loop that enforces pixel-level alignment and persistent identity preservation.
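The summary names the pipeline stages but not how the quality gate wires them together. The minimal Python sketch below shows one plausible control flow under stated assumptions; every name in it (`plan_narrative`, `render_frame`, `evaluate_quality`, `edit_locally`, `tune_prompt`, `QUALITY_THRESHOLD`) is a hypothetical stand-in, not the authors' released code.

```python
# Control-flow sketch of the quality-gated loop. Hypothetical API: every
# function and constant below is a stand-in, not the paper's implementation.

MAX_RETRIES = 3          # assumed retry budget per frame
QUALITY_THRESHOLD = 0.9  # assumed acceptance score for the MMLM gate

def generate_story(character_template, story_spec):
    """Generate one illustrated story for a fixed, expert-verified character."""
    frames = []
    # LLM-driven narrative planning yields per-frame prompts plus
    # decoupled control signals (pose / expression / scene).
    for shot in plan_narrative(character_template, story_spec):
        prompt = shot.prompt
        for _ in range(MAX_RETRIES):
            image = render_frame(
                identity=character_template.identity, prompt=prompt,
                pose=shot.pose, expression=shot.expression, scene=shot.scene,
            )
            # MMLM-based assessment scores identity consistency and
            # prompt alignment; the gate decides accept / repair / retry.
            report = evaluate_quality(image, character_template, shot)
            if report.score >= QUALITY_THRESHOLD:
                break
            if report.needs_local_fix:
                # Localized image editing repairs small regions
                # (e.g., face, hands) without a full re-render.
                image = edit_locally(image, report.regions, prompt)
                report = evaluate_quality(image, character_template, shot)
                if report.score >= QUALITY_THRESHOLD:
                    break
            # Automated prompt optimization rewrites the prompt from the
            # evaluator's feedback before the next attempt.
            prompt = tune_prompt(prompt, report.feedback)
        frames.append(image)
    return frames
```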
Contribution/Results: Models fine-tuned on this dataset match state-of-the-art closed-source models in controllability and temporal coherence, establishing a new benchmark for structured visual storytelling.
📝 Abstract
Sequential identity consistency under precise transient-attribute control remains a long-standing challenge in controllable visual storytelling. Existing datasets lack sufficient fidelity and fail to disentangle stable identities from transient attributes, limiting structured control over pose, expression, and scene composition and thus constraining reliable sequential synthesis. To address this gap, we introduce **2K-Characters-10K-Stories**, a multi-modal stylized narrative dataset of **2,000** uniquely stylized characters appearing across **10,000** illustrated stories. It is the first dataset to pair large-scale unique identities with explicit, decoupled control signals for sequential identity consistency. We introduce a **Human-in-the-Loop (HiL) pipeline** that leverages expert-verified character templates and LLM-guided narrative planning to generate highly aligned structured data. A **decoupled control** scheme separates persistent identity from transient attributes (pose and expression), while a **Quality-Gated loop** integrating MMLM evaluation, Auto-Prompt Tuning, and Local Image Editing enforces pixel-level consistency. Extensive experiments demonstrate that models fine-tuned on our dataset achieve performance comparable to closed-source models in generating visual narratives.
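To make the decoupled-control idea concrete, here is a small, runnable Python sketch of what one dataset record might look like: a persistent identity shared across all frames of a story, with transient attributes attached per frame. All field names, values, and paths are illustrative assumptions, not the released schema.

```python
from dataclasses import dataclass, field

# Hypothetical record layout: persistent identity lives on the story,
# transient attributes (pose / expression / scene) live on each frame.

@dataclass
class Frame:
    image_path: str
    pose: str        # transient, e.g. "standing", "crouching"
    expression: str  # transient, e.g. "curious", "alarmed"
    scene: str       # transient, e.g. "forest clearing"
    caption: str

@dataclass
class Story:
    character_id: str     # persistent identity, fixed across frames
    reference_sheet: str  # expert-verified character template image
    frames: list[Frame] = field(default_factory=list)

story = Story(
    character_id="char_0042",
    reference_sheet="templates/char_0042.png",
    frames=[
        Frame("stories/0042/frame_1.png", "standing", "curious",
              "forest clearing", "Mira steps into the clearing."),
        Frame("stories/0042/frame_2.png", "crouching", "alarmed",
              "forest clearing", "A sudden rustle makes Mira crouch."),
    ],
)
```

Keeping identity at the story level and transients at the frame level is what lets a model condition on the same character sheet while varying pose, expression, and scene per frame.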