🤖 AI Summary
Existing automated storyboarding methods struggle to ensure inter-shot consistency and explicit editability simultaneously. This work proposes StoryBlender, a framework that generates temporally coherent and directly editable storyboards within a unified 3D space through a three-stage pipeline: semantic-spatial grounding, canonical asset materialization, and spatial-temporal dynamics modeling. Key innovations include a story-centric reflection scheme, a continuity memory graph, and a hierarchical multi-agent verification loop with engine-verified feedback correction. Experiments show that StoryBlender significantly outperforms diffusion-based and 3D-grounded baselines in identity consistency and editing precision, enabling robust multi-shot continuity and efficient manipulation of native 3D scenes.
📝 Abstract
Storyboarding is a core skill in visual storytelling for film, animation, and games. Automating it, however, requires a system to satisfy two properties that current approaches rarely achieve simultaneously: inter-shot consistency and explicit editability. While 2D diffusion-based generators produce vivid imagery, they often suffer from identity drift and limited geometric control; conversely, traditional 3D animation workflows are consistent and editable but demand expert-heavy, labor-intensive authoring. We present StoryBlender, a grounded 3D storyboard generation framework governed by a story-centric reflection scheme and built on a three-stage pipeline: (1) Semantic-Spatial Grounding, which constructs a continuity memory graph that decouples global assets from shot-specific variables for long-horizon consistency; (2) Canonical Asset Materialization, which instantiates entities in a unified coordinate space to maintain visual identity; and (3) Spatial-Temporal Dynamics, which drives layout design and cinematic evolution guided by visual metrics. By orchestrating multiple agents hierarchically within a verification loop, StoryBlender iteratively self-corrects spatial hallucinations using engine-verified feedback. The resulting native 3D scenes support direct, precise editing of cameras and visual assets while preserving multi-shot continuity. Experiments demonstrate that StoryBlender significantly improves consistency and editability over both diffusion-based and 3D-grounded baselines. Code, data, and a demonstration video will be available at https://engineeringai-lab.github.io/StoryBlender/
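The three-stage pipeline and its verification loop described in the abstract can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: every class, function, and field name here is a hypothetical assumption.

```python
# Hypothetical sketch of the three-stage pipeline with an engine-verified loop.
# All names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class Storyboard:
    memory_graph: dict = field(default_factory=dict)   # global assets vs. shot-specific variables
    assets: list = field(default_factory=list)         # entities in a unified coordinate space
    shots: list = field(default_factory=list)          # per-shot layout and camera settings

def semantic_spatial_grounding(script: str, board: Storyboard) -> Storyboard:
    # Stage 1: build a continuity memory graph from the script (toy example data).
    board.memory_graph = {"global": ["hero"], "per_shot": [{"pose": "idle"}]}
    return board

def canonical_asset_materialization(board: Storyboard) -> Storyboard:
    # Stage 2: instantiate each global entity once, in canonical coordinates.
    board.assets = [{"name": n, "origin": (0.0, 0.0, 0.0)}
                    for n in board.memory_graph["global"]]
    return board

def spatial_temporal_dynamics(board: Storyboard) -> Storyboard:
    # Stage 3: derive shot layouts and camera choices from the shared assets.
    board.shots = [{"camera": "wide", "subjects": [a["name"] for a in board.assets]}]
    return board

def engine_verified(board: Storyboard) -> bool:
    # Placeholder for engine-based feedback (e.g. collision or visibility checks).
    return all(a["origin"] == (0.0, 0.0, 0.0) for a in board.assets)

def generate(script: str, max_rounds: int = 3) -> Storyboard:
    # Iterate the pipeline until the engine accepts the scene (self-correction).
    board = Storyboard()
    for _ in range(max_rounds):
        board = spatial_temporal_dynamics(
            canonical_asset_materialization(
                semantic_spatial_grounding(script, board)))
        if engine_verified(board):
            break
    return board
```

The point of the loop is that each round re-grounds the scene against the persistent memory graph, so corrections to spatial hallucinations never break the identities established in earlier shots.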