🤖 AI Summary
This paper addresses the core challenges of identity confusion, behavioral distortion, and style inconsistency in text-to-video generation when modeling interactions between cross-style characters (e.g., cartoon and photorealistic). To this end, we propose Cross-Character Embedding (CCE) and Cross-Character Augmentation (CCA). CCE decouples character identity representations from style modalities to enable semantic alignment across heterogeneous visual domains; CCA synthesizes multi-style training data to explicitly model behavioral logical consistency among interacting characters. Both components are jointly optimized within a unified multimodal framework to balance identity preservation, interaction coherence, and style robustness. Extensive experiments on a benchmark comprising 10 cross-style character pairs from cartoon/realistic TV series demonstrate significant improvements over baselines: +23.6% identity preservation rate, +19.4% interaction naturalness, and −37.2% style inconsistency rate. Our approach advances generative narrative synthesis toward multi-character, multi-style, high-fidelity video generation.
📝 Abstract
Imagine Mr. Bean stepping into Tom and Jerry--can we generate videos where characters interact naturally across different worlds? We study inter-character interaction in text-to-video generation, where the key challenge is to preserve each character's identity and behaviors while enabling coherent cross-context interaction. This is difficult because characters may never have coexisted and because mixing styles often causes style delusion, where realistic characters appear cartoonish or vice versa. We introduce a framework that tackles these issues with Cross-Character Embedding (CCE), which learns identity and behavioral logic across multimodal sources, and Cross-Character Augmentation (CCA), which enriches training with synthetic co-existence and mixed-style data. Together, these techniques allow natural interactions between previously uncoexistent characters without losing stylistic fidelity. Experiments on a curated benchmark of cartoons and live-action series with 10 characters show clear improvements in identity preservation, interaction quality, and robustness to style delusion, enabling new forms of generative storytelling.Additional results and videos are available on our project page: https://tingtingliao.github.io/mimix/.