🤖 AI Summary
Existing image-to-video (I2V) generation models struggle to preserve subject identity, particularly when faces occupy a small region of the frame or undergo drastic expression and pose changes. Because humans are highly sensitive to such inconsistencies, this is a critical challenge. This paper proposes IPRO, a reinforcement learning–based framework that employs a facial identity scorer as a reward signal and backpropagates gradients through the final sampling steps of the diffusion model. To jointly enhance identity fidelity and training stability, IPRO introduces a multi-view facial feature pooling mechanism and KL-divergence regularization. Extensive evaluations on Wan 2.2 and a custom I2V model demonstrate substantial improvements in identity consistency across generated videos, with particularly notable gains in low-resolution and large-pose scenarios. The code and pretrained models are publicly released.
📝 Abstract
Recent advances in image-to-video (I2V) generation have achieved remarkable progress in synthesizing high-quality, temporally coherent videos from static images. Human-centric video generation accounts for a large portion of I2V applications. However, existing I2V models have difficulty maintaining identity consistency between the input human image and the generated video, especially when the person in the video exhibits significant expression changes and movements. The issue is compounded when the human face occupies only a small fraction of the image. Since humans are highly sensitive to identity variations, this poses a critical yet under-explored challenge in I2V generation. In this paper, we propose Identity-Preserving Reward-guided Optimization (IPRO), a novel video diffusion framework based on reinforcement learning to enhance identity preservation. Instead of introducing auxiliary modules or altering model architectures, our approach introduces a direct and effective tuning algorithm that optimizes diffusion models using a face identity scorer. To improve performance and accelerate convergence, our method backpropagates the reward signal through the last steps of the sampling chain, enabling richer gradient feedback. We also propose a novel facial scoring mechanism that treats faces in ground-truth videos as facial feature pools, providing multi-angle facial information to enhance generalization. A KL-divergence regularization is further incorporated to stabilize training and prevent overfitting to the reward signal. Extensive experiments on the Wan 2.2 I2V model and our in-house I2V model demonstrate the effectiveness of our method. Our project and code are available at https://ipro-alimama.github.io/.
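The abstract describes two key ingredients: a facial scoring mechanism that compares generated faces against a pool of multi-angle embeddings from the ground-truth video, and a KL penalty that keeps the tuned model close to the base model. The sketch below is a minimal NumPy illustration of that reward shape only; it is not the authors' implementation, and the function names (`face_pool_score`, `kl_penalty`, `total_reward`), the max-over-pool cosine aggregation, and the `beta` weight are all assumptions for illustration.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D face embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_pool_score(gen_embedding, pool_embeddings):
    """Identity reward: best cosine similarity between the generated face
    embedding and a pool of embeddings extracted from ground-truth frames
    (covering multiple angles/expressions). Aggregation choice is assumed."""
    return max(cosine(gen_embedding, p) for p in pool_embeddings)

def kl_penalty(logp_tuned, logp_ref):
    """Simple per-sample KL estimate between tuned and reference model
    log-probabilities, used to regularize toward the base diffusion model."""
    return float(np.mean(logp_tuned - logp_ref))

def total_reward(gen_embedding, pool_embeddings, logp_tuned, logp_ref, beta=0.1):
    # Hypothetical combined objective: identity score minus a KL penalty.
    return face_pool_score(gen_embedding, pool_embeddings) \
        - beta * kl_penalty(logp_tuned, logp_ref)
```

In the paper's setting this scalar reward would be backpropagated through the last denoising steps of the sampling chain; the sketch only shows how the pooled identity score and KL term could combine into one objective.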