Identity-Preserving Image-to-Video Generation via Reward-Guided Optimization

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image-to-video (I2V) generation models struggle to preserve subject identity, particularly when faces occupy a small region of the frame or undergo drastic expression and pose changes; because humans are highly sensitive to such inconsistencies, this is a critical challenge. This paper proposes IPRO, a reinforcement learning based framework that uses a facial identity scorer as the reward signal and backpropagates its gradients through the final sampling steps of the diffusion model. To jointly improve identity fidelity and training stability, IPRO introduces a multi-view facial feature pooling mechanism and KL-divergence regularization. Extensive evaluations on Wan 2.2 and an in-house I2V model demonstrate substantial improvements in identity consistency, with particularly notable gains in low-resolution and large-pose scenarios. The code and pretrained models are publicly released.

📝 Abstract
Recent advances in image-to-video (I2V) generation have achieved remarkable progress in synthesizing high-quality, temporally coherent videos from static images. Human-centric video generation accounts for a large portion of I2V applications. However, existing I2V models encounter difficulties in maintaining identity consistency between the input human image and the generated video, especially when the person in the video exhibits significant expression changes and movements. The issue becomes acute when the human face occupies merely a small fraction of the image. Since humans are highly sensitive to identity variations, this poses a critical yet under-explored challenge in I2V generation. In this paper, we propose Identity-Preserving Reward-guided Optimization (IPRO), a novel video diffusion framework based on reinforcement learning to enhance identity preservation. Instead of introducing auxiliary modules or altering model architectures, our approach introduces a direct and effective tuning algorithm that optimizes diffusion models using a face identity scorer. To improve performance and accelerate convergence, our method backpropagates the reward signal through the last steps of the sampling chain, enabling richer gradient feedback. We also propose a novel facial scoring mechanism that treats faces in ground-truth videos as facial feature pools, providing multi-angle facial information to enhance generalization. A KL-divergence regularization is further incorporated to stabilize training and prevent overfitting to the reward signal. Extensive experiments on the Wan 2.2 I2V model and our in-house I2V model demonstrate the effectiveness of our method. Our project and code are available at https://ipro-alimama.github.io/.
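The optimization described in the abstract can be summarized as a KL-regularized reward objective. The notation below is ours, not the paper's: $R_{\mathrm{ID}}$ denotes the face identity score against the facial feature pool, $p_\theta$ the tuned diffusion model, $p_{\mathrm{ref}}$ the frozen pretrained model, and $\beta$ the regularization weight.

$$
\max_{\theta}\;\mathbb{E}_{x_0 \sim p_\theta}\!\left[ R_{\mathrm{ID}}(x_0) \right] \;-\; \beta\, D_{\mathrm{KL}}\!\left( p_\theta \,\|\, p_{\mathrm{ref}} \right)
$$

Per the abstract, the reward gradient is propagated only through the last steps of the sampling chain, which keeps memory manageable while still giving the model richer feedback than a single-step update.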
Problem

Research questions and friction points this paper is trying to address.

Maintaining identity consistency in image-to-video generation
Addressing identity loss with facial movements and expressions
Preserving identity when human faces occupy small image areas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reward-guided optimization for identity preservation
Backpropagates reward through sampling chain steps
Uses ground-truth videos as facial feature pools
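The three ingredients above can be sketched in a toy form: sample most of the diffusion chain without gradients, differentiate only through the last K steps, score the output with an identity reward, and regularize toward the frozen reference model. Everything here is a minimal stand-in, not the paper's implementation: the "denoiser" is a scalar-parameter toy, the identity reward is a distance to a reference feature rather than a face-embedding similarity over a feature pool, the KL term is approximated by an output-space distance, and gradients come from finite differences instead of backpropagation.

```python
import numpy as np

def denoise_step(x, theta):
    """One toy 'denoising' step; theta scales the update.
    (Stand-in for a real diffusion sampling step.)"""
    return x - 0.1 * theta * x

def sample(x_T, theta, T=10, K=3):
    """Run T sampling steps. In IPRO-style tuning, the first T-K steps
    would be no-grad; only the last K carry reward gradients."""
    x = x_T
    for _ in range(T - K):
        x = denoise_step(x, theta)   # no-grad region in the real method
    for _ in range(K):
        x = denoise_step(x, theta)   # differentiable tail
    return x

def identity_reward(x0, ref_feat):
    """Toy identity score: negative distance to a reference feature.
    Stands in for similarity against a multi-view facial feature pool."""
    return -np.sum((x0 - ref_feat) ** 2)

def objective(theta, x_T, ref_feat, theta_ref, beta):
    """Reward minus a crude KL stand-in: distance between the tuned
    model's output and the frozen reference model's output."""
    x0 = sample(x_T, theta)
    x0_ref = sample(x_T, theta_ref)
    kl_proxy = np.sum((x0 - x0_ref) ** 2)
    return identity_reward(x0, ref_feat) - beta * kl_proxy

def finetune(x_T, ref_feat, theta0=1.0, beta=0.1, lr=0.05, steps=200, eps=1e-4):
    """Gradient ascent on the regularized reward; the frozen reference
    keeps its initial parameter, as in KL-regularized RL finetuning."""
    f = lambda th: objective(th, x_T, ref_feat, theta0, beta)
    theta = theta0
    for _ in range(steps):
        grad = (f(theta + eps) - f(theta - eps)) / (2 * eps)  # finite diff
        theta += lr * grad
    return theta
```

The KL term is what keeps the tuned parameter from drifting arbitrarily far from the reference just to chase the reward, mirroring the stability role it plays in the paper.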