🤖 AI Summary
To address the high computational cost and slow inference that hinder image-to-video (I2V) generation on mobile devices, this work proposes the first real-time, high-definition I2V framework tailored for mobile platforms. Methodologically, the authors design a hybrid denoiser that mixes linear and softmax attention to balance efficiency and modeling capacity; introduce a time-step distillation strategy that compresses sampling from more than 20 steps to only two; and apply mobile-specific attention optimizations within a lightweight diffusion architecture. Experiments show that the framework generates each 720p frame in under 100 ms, over 10× faster than prior methods, while maintaining video quality competitive with existing models. This work establishes the first practical solution for real-time, high-definition I2V generation on resource-constrained mobile devices, paving the way for edge-deployable generative vision applications.
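The efficiency/capacity trade-off behind the hybrid denoiser can be illustrated with a toy sketch: softmax attention costs O(n²d) because it materializes the full attention matrix, while kernelized linear attention costs O(nd²). The `elu(x)+1` feature map, the block count, and the rule of reserving softmax attention for one block are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # O(n^2 d): materializes the n x n attention matrix.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def linear_attention(Q, K, V, eps=1e-6):
    # O(n d^2): with feature map phi(x) = elu(x) + 1, associativity lets us
    # compute phi(K)^T V once instead of an n x n attention matrix.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                      # (d, d_v) summary of keys/values
    Z = Qf @ Kf.sum(axis=0)            # (n,) per-query normalizer
    return (Qf @ KV) / (Z[:, None] + eps)

# Toy hybrid stack: cheap linear attention in most blocks, softmax
# attention in one block to retain modeling capacity.
rng = np.random.default_rng(0)
n, d = 64, 32
x = rng.standard_normal((n, d))
for i in range(4):
    attn = softmax_attention if i == 3 else linear_attention
    x = x + attn(x, x, x)              # residual attention block
print(x.shape)  # (64, 32)
```

Both variants map (n, d) inputs to (n, d) outputs, so they can be freely interleaved within one denoiser; the gain comes from linear attention dominating the block count on-device.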
📝 Abstract
Recently, video generation has witnessed rapid advances, drawing increasing attention to image-to-video (I2V) synthesis on mobile devices. However, the substantial computational complexity and slow generation speed of diffusion models pose significant challenges for real-time, high-resolution video generation on resource-constrained mobile devices. In this work, we propose MobileI2V, a 270M-parameter lightweight diffusion model for real-time image-to-video generation on mobile devices. Its core contributions are: (1) We analyze the performance of linear and softmax attention modules on mobile devices and propose a hybrid linear-attention denoiser that balances generation efficiency and quality. (2) We design a time-step distillation strategy that compresses I2V sampling from more than 20 steps to only two without significant quality loss, yielding a 10-fold increase in generation speed. (3) We apply mobile-specific attention optimizations that yield a 2-fold speed-up for attention operations during on-device inference. MobileI2V enables, for the first time, fast 720p image-to-video generation on mobile devices with quality comparable to existing models. In the one-step setting, generating each frame of 720p video takes less than 100 ms. Our code is available at: https://github.com/hustvl/MobileI2V.
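The speed-up in (2) comes from cutting the number of denoiser evaluations, not from changing per-step cost. A minimal sketch of few-step sampling under a flow-matching-style parameterization is below; the stub `denoise` function and the interpolation rule are assumptions for illustration, not MobileI2V's actual distilled model or scheduler.

```python
import numpy as np

def sample(denoise, x_T, steps):
    # Generic few-step sampler: each step predicts the clean signal,
    # then re-noises it to the next (lower) noise level t_next.
    ts = np.linspace(1.0, 0.0, steps + 1)
    x = x_T
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        x0_hat = denoise(x, t_cur)                    # one-shot clean estimate
        x = x0_hat + t_next * (x - x0_hat) / t_cur    # move toward x0_hat
    return x

# Toy "denoiser": a distilled student would be a small network; this is
# a placeholder, NOT the real model.
rng = np.random.default_rng(0)
denoise = lambda x, t: x * (1.0 - t)
x_T = rng.standard_normal((8, 8))
out2 = sample(denoise, x_T, steps=2)    # distilled regime: 2 evaluations
out20 = sample(denoise, x_T, steps=20)  # teacher-style regime: 20+ evaluations
print(out2.shape)  # (8, 8)
```

With a distilled student, the 2-evaluation trajectory is trained to match the many-step teacher's output, which is what turns a 20-step sampler into the reported 10-fold generation speed-up.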