MoCha: End-to-End Video Character Replacement without Structural Guidance

📅 2026-01-13
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes MoCha, a novel end-to-end framework for controllable video character replacement that eliminates the need for paired training data, per-frame segmentation masks, and explicit structural guidance such as skeletons or depth maps—limitations that often lead to artifacts and temporal inconsistencies under occlusion, complex interactions, unusual poses, or challenging lighting. MoCha requires only a single arbitrary-frame mask and introduces condition-aware RoPE positional encoding to enhance multimodal alignment and identity preservation. A reinforcement learning–based post-training strategy further refines generation quality. To address the scarcity of paired data, the authors construct a triple-pronged data pipeline combining high-fidelity UE5 rendering, expression-driven synthesis, and video-mask augmentation. Extensive experiments demonstrate that MoCha significantly outperforms existing methods across diverse complex scenarios, achieving superior generation quality, identity fidelity, and temporal consistency.

📝 Abstract
Controllable video character replacement with a user-provided identity remains a challenging problem due to the lack of paired video data. Prior works have predominantly relied on a reconstruction-based paradigm that requires per-frame segmentation masks and explicit structural guidance (e.g., skeleton, depth). This reliance, however, severely limits their generalizability in complex scenarios involving occlusions, character-object interactions, unusual poses, or challenging illumination, often leading to visual artifacts and temporal inconsistencies. In this paper, we propose MoCha, a pioneering framework that bypasses these limitations by requiring only a single arbitrary frame mask. To effectively adapt the multi-modal input condition and enhance facial identity, we introduce a condition-aware RoPE and employ an RL-based post-training stage. Furthermore, to overcome the scarcity of qualified paired-training data, we propose a comprehensive data construction pipeline. Specifically, we design three specialized datasets: a high-fidelity rendered dataset built with Unreal Engine 5 (UE5), an expression-driven dataset synthesized by current portrait animation techniques, and an augmented dataset derived from existing video-mask pairs. Extensive experiments demonstrate that our method substantially outperforms existing state-of-the-art approaches. We will release the code to facilitate further research. Please refer to our project page for more details: orange-3dv-team.github.io/MoCha
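The abstract's condition-aware RoPE builds on standard rotary position embeddings (RoPE), in which pairs of feature channels are rotated by position-dependent angles so that attention scores depend on relative offsets. The paper page does not detail the condition-aware variant, so the following is only a background sketch of vanilla RoPE with illustrative shapes, not MoCha's implementation:

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply standard rotary position embedding (RoPE) to x.

    x: (seq_len, dim) array with even dim; positions: (seq_len,) positions.
    Channel pairs (2i, 2i+1) are rotated by angle pos * base**(-2i/dim).
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE rotates channel pairs, so dim must be even"
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)    # (dim/2,) frequencies
    angles = positions[:, None] * inv_freq[None, :]     # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                     # split channel pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                  # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Rotation preserves vector norms; only relative positions affect the
# query-key dot products downstream.
q = rope(np.random.randn(4, 8), np.arange(4))
```

A condition-aware variant would presumably assign different position schedules to the different input modalities (video tokens, reference-identity tokens, mask tokens) before attention; the mechanics above are only the shared rotary core.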
Problem

Research questions and friction points this paper is trying to address.

video character replacement
controllable identity
paired video data
temporal consistency
visual artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

video character replacement
end-to-end learning
structural guidance-free
condition-aware RoPE
reinforcement learning post-training
Zhengbo Xu
Jie Ma
Ziheng Wang
Zhan Peng
Jun Liang (Cardiff University)
Jing Li