MobileI2V: Fast and High-Resolution Image-to-Video on Mobile Devices

📅 2025-11-26
🤖 AI Summary
To address the high computational overhead and slow inference that hinder image-to-video (I2V) generation on mobile devices, this work proposes the first real-time, high-definition I2V framework tailored for mobile platforms. Methodologically, it designs a hybrid denoiser that combines linear and softmax attention to balance efficiency and modeling capacity, introduces a time-step distillation strategy that compresses sampling from more than 20 steps to as few as two, and applies mobile-specific attention optimizations within a lightweight diffusion architecture. Experiments show per-frame generation latency under 100 ms at 720p resolution, over 10x faster than prior methods, while maintaining video quality competitive with top-tier models. This establishes the first practical solution for real-time, high-definition I2V generation on resource-constrained mobile devices, paving the way for edge-deployable generative vision applications.

📝 Abstract
Recently, video generation has witnessed rapid advancements, drawing increasing attention to image-to-video (I2V) synthesis on mobile devices. However, the substantial computational complexity and slow generation speed of diffusion models pose significant challenges for real-time, high-resolution video generation on resource-constrained mobile devices. In this work, we propose MobileI2V, a 270M-parameter lightweight diffusion model for real-time image-to-video generation on mobile devices. Its core contributions are: (1) We analyze the performance of linear attention modules and softmax attention modules on mobile devices, and propose a linear hybrid architecture denoiser that balances generation efficiency and quality. (2) We design a time-step distillation strategy that compresses the I2V sampling steps from more than 20 to only two without significant quality loss, yielding a 10-fold increase in generation speed. (3) We apply mobile-specific attention optimizations that yield a 2-fold speed-up for attention operations during on-device inference. MobileI2V enables, for the first time, fast 720p image-to-video generation on mobile devices, with quality comparable to existing models. In the one-step setting, each 720p frame is generated in under 100 ms. Our code is available at: https://github.com/hustvl/MobileI2V.
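The abstract's key efficiency trade-off is between softmax attention (quadratic in sequence length) and linear attention (linear in sequence length). A minimal NumPy sketch of the two mechanisms follows; the positive feature map `phi` and all shapes here are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def softmax_attention(q, k, v):
    # Standard softmax attention: computes an N x N score matrix, so cost
    # grows as O(N^2) with sequence length N.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def linear_attention(q, k, v, eps=1e-6):
    # Linear attention: with a positive feature map phi, attention is
    # reordered as phi(q) @ (phi(k)^T v), avoiding the N x N matrix and
    # reducing cost to O(N).
    phi = lambda x: np.maximum(x, 0.0) + eps  # simple positive map (assumption)
    qf, kf = phi(q), phi(k)
    kv = kf.T @ v                    # (d, d_v) summary, independent of N
    z = qf @ kf.sum(axis=0)          # per-query normalizer
    return (qf @ kv) / (z[:, None] + eps)

N, d = 64, 16
rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, N, d))
out_soft = softmax_attention(q, k, v)
out_lin = linear_attention(q, k, v)
print(out_soft.shape, out_lin.shape)  # both (64, 16)
```

The two outputs differ numerically (linear attention is an approximation), which is why a hybrid of both, as the paper proposes, can trade speed against modeling capacity.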
Problem

Research questions and friction points this paper is trying to address.

Real-time high-resolution video generation on mobile devices
Reducing computational complexity of diffusion models
Accelerating image-to-video synthesis speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight diffusion model for mobile video generation
Time-step distillation reduces sampling steps significantly
Mobile-specific attention optimizations double inference speed
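The second innovation, time-step distillation, cuts sampling cost because a diffusion sampler calls the denoiser once per step. A generic Euler-style sampling loop (illustrative only; `euler_sample` and `toy_velocity` are stand-ins, not the paper's scheduler or model) shows why going from 20+ steps to 2 gives roughly a 10x reduction in denoiser calls:

```python
import numpy as np

def euler_sample(velocity_fn, x, num_steps):
    # Generic Euler sampler: one denoiser evaluation per step, so total
    # cost scales linearly with num_steps. A distilled student runs this
    # same loop with num_steps=2 instead of 20+.
    ts = np.linspace(1.0, 0.0, num_steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * velocity_fn(x, t0)  # one model call
    return x

calls = {"n": 0}
def toy_velocity(x, t):
    calls["n"] += 1
    return -x  # toy dynamics standing in for the learned denoiser

x0 = np.ones((4, 4))
_ = euler_sample(toy_velocity, x0, num_steps=20)
teacher_calls = calls["n"]
calls["n"] = 0
_ = euler_sample(toy_velocity, x0, num_steps=2)
student_calls = calls["n"]
print(teacher_calls, student_calls)  # 20 2
```

Distillation trains the few-step student to match the many-step teacher's output, so the quality loss from the shorter loop stays small.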
Shuai Zhang
Huazhong University of Science and Technology
Bao Tang
Huazhong University of Science and Technology
Siyuan Yu
Huazhong University of Science and Technology
Yueting Zhu
Huazhong University of Science and Technology
Jingfeng Yao
Huazhong University of Science and Technology
computer vision, generative models
Ya Zou
Huazhong University of Science and Technology
Shanglin Yuan
Huazhong University of Science and Technology
Li Yu
Huazhong University of Science and Technology
Wenyu Liu
Huazhong University of Science and Technology
Xinggang Wang
Professor, Huazhong University of Science and Technology
Artificial Intelligence, Computer Vision, Autonomous Driving, Object Detection, Object Segmentation