AI Summary
Existing storyboard generation methods based on static diffusion models suffer from limitations in dynamic expressiveness, prompt adherence, and multi-character consistency, while multi-agent frameworks often rely on unreliable evaluation mechanisms. This work proposes the first multi-agent storyboard generation framework to integrate image-to-video (I2V) models, drawing inspiration from Disney's animation pipeline of "key poses followed by in-betweening." By leveraging the implicit motion priors inherent in I2V models, the approach enhances both character consistency and dynamic expressiveness. Furthermore, a hybrid objective-subjective review mechanism is introduced to enable iterative refinement. The method achieves state-of-the-art performance in character consistency, prompt fidelity, and stylized expression, and introduces the first human-annotated benchmark dataset for customized storyboard generation (CSG).
Abstract
Custom Storyboard Generation (CSG) aims to produce high-quality, multi-character-consistent storyboards for storytelling. Current approaches based on static diffusion models, whether used in a one-shot manner or within multi-agent frameworks, face three key limitations: (1) static models lack dynamic expressiveness and often fall back on a "copy-paste" pattern; (2) one-shot inference cannot iteratively correct missing attributes or poor prompt adherence; (3) multi-agent frameworks rely on non-robust evaluators that are ill-suited to assessing stylized, non-realistic animation. To address these issues, we propose AnimeAgent, the first Image-to-Video (I2V)-based multi-agent framework for CSG. Inspired by Disney's "Combination of Straight Ahead and Pose to Pose" workflow, AnimeAgent leverages I2V's implicit motion prior to enhance consistency and expressiveness, while a mixed subjective-objective reviewer enables reliable iterative refinement. We also collect a human-annotated CSG benchmark with ground-truth annotations. Experiments show that AnimeAgent achieves state-of-the-art performance in consistency, prompt fidelity, and stylization.