AI Summary
To address the lack of standardized evaluation benchmarks for story visualization, this paper introduces ViStoryBench, the first comprehensive benchmark for assessing narrative image generation. It spans diverse narrative genres (e.g., comedy, horror), artistic styles (e.g., animation, 3D rendering), single- and multi-character configurations, and complex world-building scenarios, evaluating models along three core dimensions: narrative consistency, character coherence, and visual aesthetics. Methodologically, ViStoryBench proposes a balanced, fine-grained evaluation framework that integrates semantic alignment, visual consistency metrics, character identity (ID) stability analysis, and a hybrid human-automated assessment protocol. Experimental results demonstrate that ViStoryBench effectively diagnoses flaws in narrative logic and visual discontinuities in long-sequence image generation by state-of-the-art models, substantially improving the comparability and interpretability of evaluations. This work fills a critical gap by establishing the first standardized, multidimensional benchmark for story visualization.
Abstract
Story visualization, which aims to generate a sequence of visually coherent images that align with a given narrative and reference images, has seen significant progress with recent advances in generative models. To further improve the performance of story visualization frameworks in real-world scenarios, we introduce a comprehensive evaluation benchmark, ViStoryBench. We collect a diverse dataset spanning various story types and artistic styles, so that models are evaluated across multiple dimensions, such as different plots (e.g., comedy, horror) and visual aesthetics (e.g., anime, 3D renderings). ViStoryBench is carefully curated to balance narrative structures and visual elements, featuring stories with single and multiple protagonists to test models' ability to maintain character consistency. It also includes complex plots and intricate world-building to challenge models to generate accurate visuals. To enable comprehensive comparisons, the benchmark incorporates a wide range of evaluation metrics covering critical aspects such as narrative consistency, character coherence, and visual quality. This structured, multifaceted framework enables researchers to thoroughly identify the strengths and weaknesses of different models, fostering targeted improvements.
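To make the automatic measures concrete, here is a minimal Python sketch of how two of the dimensions described above, text-image semantic alignment and character identity consistency, could be approximated with an off-the-shelf CLIP model. This is an illustrative sketch under stated assumptions, not ViStoryBench's actual implementation: the checkpoint name, the use of pre-cropped character regions, and the cosine-similarity scoring scheme are all assumptions for illustration.

```python
# Illustrative sketch only; not ViStoryBench's official metric code.
# Assumes: shots come with a text prompt and a generated image, and that
# character regions have already been cropped from each shot (e.g., by a
# detector); the CLIP checkpoint below is an arbitrary public choice.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def alignment_score(prompt: str, image: Image.Image) -> float:
    """Cosine similarity between a shot's prompt and its generated image,
    used as a rough proxy for semantic (prompt-image) alignment."""
    inputs = processor(text=[prompt], images=[image],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return float((text_emb @ img_emb.T).item())

def consistency_score(character_crops: list[Image.Image]) -> float:
    """Mean pairwise cosine similarity among crops of the same character
    across shots; higher means a more stable visual identity."""
    if len(character_crops) < 2:
        return 1.0  # a single appearance is trivially consistent
    inputs = processor(images=character_crops, return_tensors="pt")
    with torch.no_grad():
        emb = model.get_image_features(**inputs)
    emb = emb / emb.norm(dim=-1, keepdim=True)
    sim = emb @ emb.T
    n = sim.shape[0]
    # Average the off-diagonal entries (exclude self-similarity).
    return float((sim.sum() - sim.diagonal().sum()) / (n * (n - 1)))
```

In a full benchmark run, per-shot scores like these would be aggregated over each entire story and combined with visual-consistency metrics and human ratings; the snippet only illustrates the shape of the automatic portion.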