🤖 AI Summary
This work addresses the challenging problem of precise object displacement editing in static images, which requires jointly modeling illumination changes, perspective-aware pose adjustment, occlusion-aware inpainting, shadow/reflection consistency, and identity preservation. We propose the first method to formulate object relocation as a sequence-to-sequence generation task, leveraging temporal consistency priors from video diffusion models. To overcome data scarcity, we design a synthetic data pipeline using Unreal Engine, enabling controllable generation of physically grounded motion sequences. We further introduce a multi-task joint training framework encompassing motion prediction, image reconstruction, and photometric consistency supervision, enhancing generalization to real-world scenes. Our approach achieves state-of-the-art performance on complex natural images, demonstrating superior fidelity in illumination harmonization, occlusion completion, and spatiotemporal coherence of dynamic effects such as shadows and reflections.
📝 Abstract
Simple as it seems, moving an object to another location within an image is, in fact, a challenging image-editing task that requires re-harmonizing the lighting, adjusting the pose based on perspective, accurately filling occluded regions, and ensuring coherent synchronization of shadows and reflections while maintaining the object's identity. In this paper, we present ObjectMover, a generative model that can perform object movement in highly challenging scenes. Our key insight is to model this task as a sequence-to-sequence problem and fine-tune a video generation model to leverage its knowledge of consistent object generation across video frames. We show that with this approach, our model is able to adapt to complex real-world scenarios, handling extreme lighting harmonization and object effect movement. As large-scale data for object movement are unavailable, we construct a data generation pipeline using a modern game engine to synthesize high-quality data pairs. We further propose a multi-task learning strategy that enables training on real-world video data to improve the model's generalization. Through extensive experiments, we demonstrate that ObjectMover achieves outstanding results and adapts well to real-world scenarios.
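To make the sequence-to-sequence framing concrete, the sketch below illustrates one plausible way an object-movement task could be packed into a short frame sequence for a video model to condition on. This is a conceptual illustration only, not the authors' implementation; the function name, frame layout, and three-frame design are all hypothetical assumptions.

```python
# Illustrative sketch (NOT ObjectMover's actual code): packing an object-move
# edit as a short "video" so a sequence-to-sequence model can condition on it.
# Hypothetical layout: frame 0 = original image, frame 1 = image with the
# object removed (hole to inpaint), frame 2 = object pixels placed at the
# target location as a spatial cue. The fine-tuned video model would then
# generate the final harmonized frame.
import numpy as np

def build_seq2seq_input(source_img, object_mask, target_pos):
    """source_img: (H, W, C) float array; object_mask: (H, W) binary array;
    target_pos: (row, col) of the object's new top-left position."""
    h, w, _ = source_img.shape
    # Frame 1: zero out the object's original pixels, leaving a hole.
    removed = source_img * (1 - object_mask[..., None])
    # Frame 2: translate the masked object pixels to the target location.
    cue = np.zeros_like(source_img)
    ys, xs = np.nonzero(object_mask)
    dy, dx = target_pos[0] - ys.min(), target_pos[1] - xs.min()
    for y, x in zip(ys, xs):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            cue[ny, nx] = source_img[y, x]
    # Stack into a (3, H, W, C) conditioning sequence.
    return np.stack([source_img, removed, cue])
```

Framing the edit this way lets a video model's prior for temporally consistent objects do the heavy lifting: the same identity must persist across "frames" even as position, lighting, and cast shadows change.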