DreamLoop: Controllable Cinemagraph Generation from a Single Photograph

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel approach for generating controllable and seamlessly looping cinemagraphs from a single static image in general scenes, where existing methods struggle to achieve both high visual quality and intuitive motion control. Leveraging a general-purpose video diffusion model, the method introduces temporal bridging training and motion-path conditioning, augmented with start-end frame consistency constraints and static-region masking to guide user-specified motion areas. Notably, it requires no cinemagraph-specific training data and is the first to enable seamless looping with explicit motion control in unconstrained settings. The resulting cinemagraphs demonstrate significant improvements over prior art in both visual fidelity and flexibility of user-directed animation.

📝 Abstract
Cinemagraphs, which combine static photographs with selective, looping motion, offer unique artistic appeal. Generating them from a single photograph in a controllable manner is particularly challenging. Existing image-animation techniques are restricted to simple, low-frequency motions and operate only in narrow domains with repetitive textures such as water and smoke. In contrast, large-scale video diffusion models are not tailored to cinemagraph constraints and lack the specialized data required to generate seamless, controlled loops. We present DreamLoop, a controllable video synthesis framework dedicated to generating cinemagraphs from a single photo without requiring any cinemagraph training data. Our key idea is to adapt a general video diffusion model by training it on two objectives: temporal bridging and motion conditioning. This strategy enables flexible cinemagraph generation. During inference, by using the input image as both the first- and last-frame condition, we enforce a seamless loop. By conditioning on static tracks, we keep the background still. Finally, by providing a user-specified motion path for a target object, our method gives intuitive control over the animation's trajectory and timing. To our knowledge, DreamLoop is the first method to enable cinemagraph generation for general scenes with flexible and intuitive controls. We demonstrate that our method produces high-quality, complex cinemagraphs that align with user intent, outperforming existing approaches.
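The loop-closure and control mechanisms described in the abstract can be sketched as a small conditioning builder. This is a hypothetical illustration of the idea, not the paper's actual API: the function names, dictionary keys, and track representation here are all assumptions.

```python
# Hypothetical sketch of DreamLoop-style inference conditioning.
# All identifiers are illustrative; the paper does not publish this interface.

def resample_path(waypoints, n):
    """Linearly interpolate n points along a polyline of (x, y) waypoints,
    so the path encodes timing (one waypoint per generated frame)."""
    if len(waypoints) == 1:
        return waypoints * n
    out = []
    for i in range(n):
        t = i * (len(waypoints) - 1) / (n - 1)
        j = min(int(t), len(waypoints) - 2)
        f = t - j
        (x0, y0), (x1, y1) = waypoints[j], waypoints[j + 1]
        out.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return out

def build_loop_conditioning(input_image, motion_path, static_points, num_frames=16):
    """Assemble the conditioning signals for one cinemagraph sample.

    input_image   -- the single source photo, used as BOTH the first- and
                     last-frame condition, which forces the clip to close
                     into a seamless loop
    motion_path   -- user-drawn (x, y) waypoints for the animated object
    static_points -- pixel locations that must stay frozen (static tracks)
    """
    # Static tracks: each frozen point simply repeats its position in
    # every frame, telling the model that region must not move.
    static_tracks = [[p] * num_frames for p in static_points]
    return {
        "first_frame": input_image,                       # start anchor
        "last_frame": input_image,                        # end == start -> loop
        "motion_tracks": [resample_path(motion_path, num_frames)],
        "static_tracks": static_tracks,
    }
```

The key design point this sketch highlights is that loop closure needs no cinemagraph-specific data: it falls out of reusing the same image as both endpoint conditions of an ordinary first/last-frame-conditioned video diffusion model.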
Problem

Research questions and friction points this paper is trying to address.

cinemagraph
controllable animation
single-image animation
seamless loop
motion control
Innovation

Methods, ideas, or system contributions that make the work stand out.

cinemagraph generation
video diffusion model
controllable animation
temporal bridging
motion conditioning