AnimeAgent: Is the Multi-Agent via Image-to-Video models a Good Disney Storytelling Artist?

πŸ“… 2026-02-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing storyboard generation methods based on static diffusion models suffer from limited dynamic expressiveness, weak prompt adherence, and poor multi-character consistency, while multi-agent frameworks often rely on unreliable evaluation mechanisms. This work proposes the first multi-agent storyboard generation framework built on image-to-video (I2V) models, drawing inspiration from Disney's animation workflow of key poses followed by in-betweening. By leveraging the implicit motion priors inherent in I2V models, the approach enhances both character consistency and dynamic expressiveness, and a hybrid objective-subjective review mechanism enables iterative refinement. The method achieves state-of-the-art performance in character consistency, prompt fidelity, and stylized expression, and introduces the first human-annotated benchmark dataset for Custom Storyboard Generation (CSG).

πŸ“ Abstract
Custom Storyboard Generation (CSG) aims to produce high-quality, multi-character consistent storytelling. Current approaches based on static diffusion models, whether used in a one-shot manner or within multi-agent frameworks, face three key limitations: (1) Static models lack dynamic expressiveness and often resort to a "copy-paste" pattern. (2) One-shot inference cannot iteratively correct missing attributes or poor prompt adherence. (3) Multi-agents rely on non-robust evaluators, ill-suited for assessing stylized, non-realistic animation. To address these, we propose AnimeAgent, the first Image-to-Video (I2V)-based multi-agent framework for CSG. Inspired by Disney's "Combination of Straight Ahead and Pose to Pose" workflow, AnimeAgent leverages I2V's implicit motion prior to enhance consistency and expressiveness, while a mixed subjective-objective reviewer enables reliable iterative refinement. We also collect a human-annotated CSG benchmark with ground truth. Experiments show AnimeAgent achieves SOTA performance in consistency, prompt fidelity, and stylization.
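The "mixed subjective-objective reviewer enables reliable iterative refinement" idea from the abstract can be sketched as a simple generate-review-refine loop. This is a hypothetical illustration, not the paper's released code: the names (`refine_storyboard`, `Review`), the equal weighting of the two scores, and the acceptance threshold are all assumptions for the sake of the sketch.

```python
from dataclasses import dataclass

@dataclass
class Review:
    """Hypothetical reviewer output combining two kinds of scores."""
    objective: float   # e.g., an automatic metric such as character-identity similarity
    subjective: float  # e.g., a judge model's score for stylized, non-realistic quality

    def combined(self, weight: float = 0.5) -> float:
        # Equal weighting is an assumption; the paper does not specify the mix.
        return weight * self.objective + (1 - weight) * self.subjective

def refine_storyboard(generate, review, max_rounds: int = 3, threshold: float = 0.8):
    """Regenerate storyboard frames until the mixed reviewer is satisfied.

    `generate(feedback)` produces candidate frames (feedback is None on the
    first round); `review(frames)` returns a Review. Both are caller-supplied.
    """
    feedback = None
    frames = None
    for round_idx in range(max_rounds):
        frames = generate(feedback)
        result = review(frames)
        if result.combined() >= threshold:
            return frames, round_idx + 1
        feedback = (f"round {round_idx}: combined score "
                    f"{result.combined():.2f} below {threshold}")
    return frames, max_rounds
```

A caller would plug in an I2V generation step and the two evaluators; the loop then stops early once the combined score clears the threshold, which is what lets one-shot failures (missing attributes, poor prompt adherence) be corrected.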
Problem

Research questions and friction points this paper is trying to address.

Custom Storyboard Generation
multi-character consistency
dynamic expressiveness
stylized animation evaluation
prompt adherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image-to-Video
Multi-Agent Framework
Custom Storyboard Generation
Motion Prior
Iterative Refinement
πŸ”Ž Similar Papers
No similar papers found.