MoAlign: Motion-Centric Representation Alignment for Video Diffusion Models

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-video diffusion models often suffer from temporal incoherence and physically implausible motion because they model complex dynamics poorly. To address this, the paper proposes a motion-centric representation alignment framework built around a disentangled motion subspace. The subspace is learned from a frozen, pretrained video encoder and optimized to predict ground-truth optical flow, so it captures motion dynamics rather than appearance. The diffusion model's latent features are then aligned to this physically grounded subspace, combining flow-supervised subspace learning, feature disentanglement, and cross-model latent alignment. Evaluations on physical-commonsense and quality benchmarks (VideoPhy, VideoPhy2, VBench, and VBench-2.0) show substantial improvements in physical plausibility while preserving text-video alignment, and a user study confirms gains in perceived generation quality.

📝 Abstract
Text-to-video diffusion models have enabled high-quality video synthesis, yet often fail to generate temporally coherent and physically plausible motion. A key reason is the models' insufficient understanding of complex motions that natural videos often entail. Recent works tackle this problem by aligning diffusion model features with those from pretrained video encoders. However, these encoders mix video appearance and dynamics into entangled features, limiting the benefit of such alignment. In this paper, we propose a motion-centric alignment framework that learns a disentangled motion subspace from a pretrained video encoder. This subspace is optimized to predict ground-truth optical flow, ensuring it captures true motion dynamics. We then align the latent features of a text-to-video diffusion model to this new subspace, enabling the generative model to internalize motion knowledge and generate more plausible videos. Our method improves the physical commonsense in a state-of-the-art video diffusion model, while preserving adherence to textual prompts, as evidenced by empirical evaluations on VideoPhy, VideoPhy2, VBench, and VBench-2.0, along with a user study.
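
To make the two-stage idea in the abstract concrete, below is a minimal sketch of the first stage: learning a motion subspace on top of a frozen video encoder by supervising it with ground-truth optical flow. The tensor shapes, module names (`to_motion`, `flow_head`), and the MSE flow loss are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the motion-subspace learning stage described in the abstract.
# Shapes, modules, and the flow-regression loss are assumptions for illustration.

B, T, D = 2, 8, 1024        # batch, frames, encoder feature dim (assumed)
D_MOTION = 256              # motion subspace dim (assumed)
H, W = 16, 16               # spatial grid of the (downsampled) flow target (assumed)

# Features from a frozen, pretrained video encoder (random stand-ins here).
video_features = torch.randn(B, T, D)          # encoder features, no grad needed
flow_gt = torch.randn(B, T - 1, 2, H, W)       # ground-truth optical flow between frames

# Learnable projection onto the motion subspace.
to_motion = nn.Linear(D, D_MOTION)

# Small head that predicts dense flow from pairs of consecutive motion features.
flow_head = nn.Sequential(
    nn.Linear(2 * D_MOTION, 512),
    nn.GELU(),
    nn.Linear(512, 2 * H * W),
)

def motion_subspace_loss(video_features, flow_gt):
    """Optimize the subspace so it is predictive of true motion (optical flow)."""
    m = to_motion(video_features)                       # (B, T, D_MOTION)
    pairs = torch.cat([m[:, :-1], m[:, 1:]], dim=-1)    # consecutive-frame pairs
    pred = flow_head(pairs).view(B, T - 1, 2, H, W)
    return nn.functional.mse_loss(pred, flow_gt)

loss = motion_subspace_loss(video_features, flow_gt)
loss.backward()
print(f"flow-prediction loss: {loss.item():.4f}")
```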
Problem

Research questions and friction points this paper is trying to address.

Improving temporal coherence in video diffusion models
Enhancing physical plausibility of generated motions
Disentangling motion dynamics from appearance features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns a disentangled motion subspace from a pretrained video encoder, supervised by ground-truth optical flow
Aligns the diffusion model's latent features to this motion subspace (see the sketch after this list)
Improves physical commonsense in generated videos while preserving adherence to text prompts
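
The sketch below illustrates the second step named in the list above: aligning intermediate diffusion features with the learned motion subspace during diffusion training. The projection head, negative-cosine alignment loss, and loss weight are hypothetical choices used only to show how such an alignment term could be combined with the usual denoising objective; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the alignment step: diffusion latents are projected and
# pulled toward frozen motion-subspace targets while the diffusion model trains.

B, T, D_DIFF = 2, 8, 1152   # batch, frames, diffusion feature dim (assumed)
D_MOTION = 256              # must match the motion subspace dim from the first sketch

align_proj = nn.Linear(D_DIFF, D_MOTION)   # trainable head on the diffusion side

def alignment_loss(diffusion_feats, motion_feats):
    """Negative cosine similarity between projected diffusion features and
    motion-subspace targets (targets are detached: the subspace stays frozen)."""
    pred = F.normalize(align_proj(diffusion_feats), dim=-1)
    target = F.normalize(motion_feats.detach(), dim=-1)
    return -(pred * target).sum(dim=-1).mean()

# Dummy intermediate features; in practice these come from a diffusion transformer
# block and from to_motion(video_features) of the first sketch, respectively.
diffusion_feats = torch.randn(B, T, D_DIFF)
motion_feats = torch.randn(B, T, D_MOTION)

# Combined objective: standard denoising loss plus a weighted alignment term.
denoise_loss = torch.tensor(0.1)             # placeholder for the usual diffusion loss
total_loss = denoise_loss + 0.5 * alignment_loss(diffusion_feats, motion_feats)
total_loss.backward()
```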