Character Mixing for Video Generation

📅 2025-10-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the core challenges of identity confusion, behavioral distortion, and style inconsistency in text-to-video generation when modeling interactions between cross-style characters (e.g., cartoon and photorealistic). To this end, we propose Cross-Character Embedding (CCE) and Cross-Character Augmentation (CCA). CCE decouples character identity representations from style modalities to enable semantic alignment across heterogeneous visual domains; CCA synthesizes multi-style training data to explicitly model behavioral logical consistency among interacting characters. Both components are jointly optimized within a unified multimodal framework to balance identity preservation, interaction coherence, and style robustness. Extensive experiments on a benchmark comprising 10 cross-style character pairs from cartoon/realistic TV series demonstrate significant improvements over baselines: +23.6% identity preservation rate, +19.4% interaction naturalness, and −37.2% style inconsistency rate. Our approach advances generative narrative synthesis toward multi-character, multi-style, high-fidelity video generation.

Technology Category

Application Category

📝 Abstract
Imagine Mr. Bean stepping into Tom and Jerry--can we generate videos where characters interact naturally across different worlds? We study inter-character interaction in text-to-video generation, where the key challenge is to preserve each character's identity and behaviors while enabling coherent cross-context interaction. This is difficult because characters may never have coexisted and because mixing styles often causes style delusion, where realistic characters appear cartoonish or vice versa. We introduce a framework that tackles these issues with Cross-Character Embedding (CCE), which learns identity and behavioral logic across multimodal sources, and Cross-Character Augmentation (CCA), which enriches training with synthetic co-existence and mixed-style data. Together, these techniques allow natural interactions between previously uncoexistent characters without losing stylistic fidelity. Experiments on a curated benchmark of cartoons and live-action series with 10 characters show clear improvements in identity preservation, interaction quality, and robustness to style delusion, enabling new forms of generative storytelling.Additional results and videos are available on our project page: https://tingtingliao.github.io/mimix/.
Problem

Research questions and friction points this paper is trying to address.

Preserving character identity and behaviors during cross-context video generation
Preventing style delusion when mixing realistic and cartoon characters together
Enabling natural interactions between characters from different fictional worlds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Character Embedding preserves identity and behavior
Cross-Character Augmentation enriches training with synthetic data
Framework enables natural interactions across different character worlds
🔎 Similar Papers
No similar papers found.
Tingting Liao
Tingting Liao
PhD of MBZUAI
3D Human Generation
C
Chongjian Ge
Mohamed bin Zayed University of Artificial Intelligence
G
Guangyi Liu
Mohamed bin Zayed University of Artificial Intelligence
H
Hao Li
Mohamed bin Zayed University of Artificial Intelligence
Y
Yi Zhou
Mohamed bin Zayed University of Artificial Intelligence