🤖 AI Summary
This work addresses the challenge of simultaneously achieving character controllability, motion generalization, and realistic scene interaction in animated character video synthesis. We propose the first monocular depth-guided, 3D-aware spatial decomposition framework. Our method decomposes videos into three layers—character, scene, and floating occluders—each encoded as disentangled identity, motion, and scene control codes, enabling arbitrary character replacement, novel 3D motion transfer, and complex dynamic scene composition. Technically, we integrate monocular depth estimation, hierarchical spatial encoding, pre-trained diffusion models, and 3D-aware controllable generation. Extensive evaluations on multiple benchmarks demonstrate significant improvements in motion generalization and scene interaction fidelity. The framework supports zero-shot character swapping and efficient inference, requiring no multi-view supervision.
📝 Abstract
Character video synthesis aims to produce realistic videos of animatable characters within lifelike scenes. As a fundamental problem in the computer vision and graphics community, 3D works typically require multi-view captures for per-case training, which severely limits their applicability of modeling arbitrary characters in a short time. Recent 2D methods break this limitation via pre-trained diffusion models, but they struggle for pose generality and scene interaction. To this end, we propose MIMO, a novel framework which can not only synthesize character videos with controllable attributes (i.e., character, motion and scene) provided by simple user inputs, but also simultaneously achieve advanced scalability to arbitrary characters, generality to novel 3D motions, and applicability to interactive real-world scenes in a unified framework. The core idea is to encode the 2D video to compact spatial codes, considering the inherent 3D nature of video occurrence. Concretely, we lift the 2D frame pixels into 3D using monocular depth estimators, and decompose the video clip to three spatial components (i.e., main human, underlying scene, and floating occlusion) in hierarchical layers based on the 3D depth. These components are further encoded to canonical identity code, structured motion code and full scene code, which are utilized as control signals of synthesis process. The design of spatial decomposed modeling enables flexible user control, complex motion expression, as well as 3D-aware synthesis for scene interactions. Experimental results demonstrate effectiveness and robustness of the proposed method.