OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer

📅 2026-01-20
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing video customization methods rely on reference images or task-specific temporal priors, which struggle to fully exploit the intrinsic spatio-temporal information in videos, limiting both the flexibility and the generalization of generation. This work proposes OmniTransfer, a unified framework that enhances appearance consistency through multi-view inter-frame information and integrates temporal cues for fine-grained temporal control. OmniTransfer introduces three key mechanisms: task-aware positional bias, reference-decoupled causal learning, and task-adaptive multimodal alignment. Notably, it achieves high-quality motion transfer without requiring pose annotations, and it unifies support for diverse video transfer tasks. Experiments demonstrate that OmniTransfer outperforms existing approaches in identity and style transfer as well as camera-motion and visual-effect generation, while matching pose-based models in motion transfer fidelity, enabling realistic and flexible video synthesis.

📝 Abstract
Videos convey richer information than images or text, capturing both spatial and temporal dynamics. However, most existing video customization methods rely on reference images or task-specific temporal priors, failing to fully exploit the rich spatio-temporal information inherent in videos, thereby limiting flexibility and generalization in video generation. To address these limitations, we propose OmniTransfer, a unified framework for spatio-temporal video transfer. It leverages multi-view information across frames to enhance appearance consistency and exploits temporal cues to enable fine-grained temporal control. To unify various video transfer tasks, OmniTransfer incorporates three key designs: Task-aware Positional Bias that adaptively leverages reference video information to improve temporal alignment or appearance consistency; Reference-decoupled Causal Learning separating reference and target branches to enable precise reference transfer while improving efficiency; and Task-adaptive Multimodal Alignment using multimodal semantic guidance to dynamically distinguish and tackle different tasks. Extensive experiments show that OmniTransfer outperforms existing methods in appearance (ID and style) and temporal transfer (camera movement and video effects), while matching pose-guided methods in motion transfer without using pose, establishing a new paradigm for flexible, high-fidelity video generation.
Problem

Research questions and friction points this paper is trying to address.

video customization
spatio-temporal information
temporal priors
video generation
reference images
Innovation

Methods, ideas, or system contributions that make the work stand out.

OmniTransfer
spatio-temporal video transfer
reference-decoupled causal learning
task-aware positional bias
multimodal alignment
Pengze Zhang
Intelligent Creation Lab, ByteDance
Yanze Wu
ByteDance
Computer Vision
Mengtian Li
Intelligent Creation Lab, ByteDance
Xu Bai
Intelligent Creation Lab, ByteDance
Songtao Zhao
Intelligent Creation Lab, ByteDance
Fulong Ye
ByteDance
Vision-Language Pretraining, Generative Models, Diffusion Models
Chong Mou
Peking University
Diffusion Models, AI-Generated Content, Low-level Computer Vision
Xinghui Li
Intelligent Creation Lab, ByteDance
Zhuowei Chen
ByteDance
Video Generation, Multimodal Generation
Qian He
ByteDance
Mingyuan Gao
Professor, Institute of Chemistry, Chinese Academy of Sciences