Mobius: Text to Seamless Looping Video Generation via Latent Shift

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing text-to-loop video generation methods, which rely on user annotations, image templates, or model fine-tuning. The authors propose a training-free, purely text-driven approach to seamless looping video synthesis. The method runs entirely on a pre-trained video latent diffusion model and introduces a cyclic latent construction of arbitrary length: at each denoising step, the latent of the first frame is progressively shifted toward the end of the cycle while all frames are denoised jointly, enforcing explicit temporal consistency. This "latent shift" strategy requires no additional training and ensures both visual and motion continuity between the loop's start and end frames. Experiments demonstrate state-of-the-art performance in motion richness, loop smoothness, and visual fidelity, significantly outperforming baseline approaches. The framework enables high-quality, dynamic, text-conditioned looping videos of arbitrary length.
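
A minimal sketch of the core mechanism may help. Everything below is illustrative rather than the authors' code: `denoise_step` is a hypothetical wrapper around one denoising step of a pre-trained video latent diffusion model, and the tensor shapes and shift size are assumptions.

```python
# Sketch of the "latent shift" idea (illustrative, not the paper's code).
# Assumption: `denoise_step(latents, step)` performs one denoising step of a
# pre-trained video latent diffusion model on a clip of frame latents.
import torch

def latent_shift_denoise(denoise_step, num_steps, context_len=16,
                         channels=4, height=32, width=32, shift=1):
    # Start from a cyclic buffer of per-frame noise latents. For simplicity
    # the cycle length here equals the model's temporal context.
    latents = torch.randn(context_len, channels, height, width)

    for step in range(num_steps):
        # Denoise all frames jointly so the model's temporal attention
        # enforces consistency across the current window.
        latents = denoise_step(latents, step)

        # Latent shift: rotate the first frame(s) toward the end of the
        # cycle. Each step the model sees a different cyclic "cut", so no
        # single frame boundary becomes a visible seam.
        latents = torch.roll(latents, shifts=-shift, dims=0)

    # Undo the accumulated rotation so frame 0 is the loop start again.
    total = (shift * num_steps) % context_len
    return torch.roll(latents, shifts=total, dims=0)
```

Because the cut point of the cycle moves at every step, no frame boundary is ever denoised only at the edge of the temporal window, which is what closes the loop seamlessly.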

📝 Abstract
We present Mobius, a novel method that generates seamlessly looping videos directly from text descriptions, without any user annotations, thereby creating new visual material for multimedia presentations. Our method repurposes a pre-trained video latent diffusion model to generate looping videos from text prompts without any training. During inference, we first construct a latent cycle by connecting the starting and ending noise of the video. Since temporal consistency is maintained by the context of the video diffusion model, we perform multi-frame latent denoising while gradually shifting the first-frame latent toward the end at each step. As a result, the denoising context varies from step to step while consistency is maintained throughout the inference process. Moreover, the latent cycle in our method can be of any length, which extends our latent-shifting approach to seamless looping videos longer than the video diffusion model's context. Unlike previous cinemagraph methods, ours does not require an image as the appearance reference, which would restrict the motion of the generated results; instead, it produces more dynamic motion and better visual quality. We conduct multiple experiments and comparisons to verify the effectiveness of the proposed method, demonstrating its efficacy in different scenarios. All code will be made available.
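
The abstract also states that the latent cycle can exceed the model's context. One plausible way to realize that is wrap-around windowed denoising; the sketch below is an assumption-laden illustration, with a hypothetical `denoise_window`, an assumed shift of one frame per step, and a simple averaging scheme for overlapping windows that is not necessarily the paper's exact blending rule.

```python
# Sketch of denoising a latent cycle longer than the model context
# (assumed behavior, based on the abstract; not the authors' code).
import torch

def denoise_cyclic(denoise_window, latents, step, context_len=16, stride=8):
    """One denoising step over a cycle of arbitrary length.

    `latents` has shape (cycle_len, C, H, W) with cycle_len >= context_len.
    Requires stride <= context_len so every frame is covered. Overlapping
    windows wrap around the cycle, so the start and end of the video are
    denoised together and the loop closes seamlessly.
    """
    cycle_len = latents.shape[0]
    out = torch.zeros_like(latents)
    counts = torch.zeros(cycle_len)

    # The per-step offset realizes the latent shift: window boundaries
    # move every step, distributing the seam over the whole cycle.
    offset = step % cycle_len  # assumed shift of 1 frame per step
    for start in range(offset, offset + cycle_len, stride):
        idx = torch.arange(start, start + context_len) % cycle_len
        out[idx] += denoise_window(latents[idx], step)
        counts[idx] += 1

    # Average overlapping predictions (a common blending choice).
    return out / counts.view(-1, 1, 1, 1)
```

Calling this once per denoising step, with the cycle buffer carried between steps, yields a loop of any chosen length while each model invocation still sees only `context_len` frames.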
Problem

Research questions and friction points this paper is trying to address.

Generate looping videos from text descriptions
Maintain temporal consistency in video generation
Produce dynamic motion without user annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Shift Technique
No Training Required
Dynamic Motion Generation