🤖 AI Summary
Existing methods struggle to generate temporally coherent image sequences from open-domain textual stories, particularly in preserving both visual appearance and semantic consistency of characters across frames. To address this, we propose an LLM-driven Story Director framework coupled with a Multi-Subject Consistent Diffusion (MSD) model. Our approach introduces Masked Mutual Self- and Cross-Attention (MMSA/MMCA), a novel attention mechanism that mitigates multi-character confusion by enforcing role-aware feature alignment. We further integrate multimodal anchor guidance and evaluate performance on the DS-500 benchmark, a newly established protocol for story-to-video generation. On DS-500, our method achieves substantial improvements: +12.3% in subject identification accuracy and +28.6% in subjective cross-frame consistency scores. Comprehensive objective and subjective evaluations demonstrate consistent superiority over state-of-the-art methods, establishing a scalable, highly controllable paradigm for narrative visualization.
📖 Abstract
Story visualization aims to create visually compelling images or videos corresponding to textual narratives. Despite recent advances in diffusion models yielding promising results, existing methods still struggle to create a coherent sequence of subject-consistent frames based solely on a story. To this end, we propose DreamStory, an automatic open-domain story visualization framework that leverages LLMs and a novel multi-subject consistent diffusion model. DreamStory consists of (1) an LLM acting as a story director and (2) an innovative Multi-Subject consistent Diffusion model (MSD) for generating multiple consistent subjects across images. First, DreamStory employs the LLM to generate descriptive prompts for subjects and scenes aligned with the story, annotating each scene's subjects for subsequent subject-consistent generation. Second, DreamStory uses these detailed subject descriptions to create portraits of the subjects; these portraits and their corresponding textual descriptions serve as multimodal anchors (guidance). Finally, the MSD uses these multimodal anchors to generate story scenes with consistent multiple subjects. Specifically, the MSD includes Masked Mutual Self-Attention (MMSA) and Masked Mutual Cross-Attention (MMCA) modules, which ensure appearance and semantic consistency with the reference images and text, respectively. Both modules employ masking mechanisms to prevent subject blending. To validate our approach and promote progress in story visualization, we established a benchmark, DS-500, which assesses the overall performance of the story visualization framework, subject-identification accuracy, and the consistency of the generation model. Extensive experiments validate the effectiveness of DreamStory in both subjective and objective evaluations. Please visit our project homepage at https://dream-xyz.github.io/dreamstory.
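To make the masked mutual attention idea concrete, the sketch below shows one plausible form of it: queries from the frame being generated attend to keys/values taken from a reference (anchor) image, with a binary subject mask restricting attention to that subject's region so features from one character cannot leak onto another. The function name, tensor shapes, and masking details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def masked_mutual_attention(q, k_ref, v_ref, mask_ref):
    """Illustrative masked mutual attention (hypothetical shapes/names).

    q        : (Nq, d)  queries from the frame being generated
    k_ref    : (Nr, d)  keys from a reference (anchor) image
    v_ref    : (Nr, d)  values from the same reference image
    mask_ref : (Nr,)    binary mask marking the subject's region, so
                        attention cannot fall on other subjects/background
    """
    d = q.shape[-1]
    logits = (q @ k_ref.T) / np.sqrt(d)                      # (Nq, Nr)
    # Suppress keys outside the subject's mask before the softmax.
    logits = np.where(mask_ref[None, :] > 0, logits, -1e9)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v_ref                                   # (Nq, d)
```

In a full model, one such call would be made per subject, each with its own anchor features and mask, and the results merged into the denoising network's attention layers.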