🤖 AI Summary
Existing video editing methods struggle to achieve cross-frame-consistent 3D object compositing in static scenes with camera motion. This paper introduces the first generative-prior-based, video-level 3D object compositing framework, and it requires no training. First, it lifts intermediate features of a video diffusion model into a 3D reconstruction space shared across frames, enabling spatiotemporally consistent geometric modeling. Second, it edits object 3D poses directly within this space. Finally, it reprojects the edited 3D content back onto each frame. Because all frames share a single 3D representation, the method inherently ensures inter-frame consistency and significantly outperforms image-level editing baselines. The core contribution is the coupling of implicit generative priors with explicit 3D reconstruction, enabling zero-shot, high-fidelity, cross-frame-consistent 3D compositing edits in video.
📝 Abstract
Generative methods for image and video editing use generative models as priors to perform edits despite incomplete information, such as changing the composition of 3D objects shown in a single image. Recent methods have shown promising composition editing results in the image setting, but in the video setting, editing methods have focused on an object's appearance and motion, or on camera motion; as a result, methods to edit object composition in videos are still missing. We propose a method for editing 3D object compositions in videos of static scenes with camera motion. Our approach allows editing the 3D position of an object across all frames of a video in a temporally consistent manner. This is achieved by lifting intermediate features of a generative model to a 3D reconstruction that is shared between all frames, editing the reconstruction, and projecting the features on the edited reconstruction back to each frame. To the best of our knowledge, this is the first generative approach to edit object compositions in videos. Our approach is simple and training-free, while outperforming state-of-the-art image editing baselines.
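The lift–edit–reproject pipeline described above can be sketched in a few lines. This is only an illustrative toy with a pinhole-camera model and nearest-point splatting; the function names, masking scheme, and geometry are assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def lift_to_shared_space(features, depth, K, cam_to_world):
    """Back-project one frame's feature map into world-space 3D points.

    features: (H, W, C) intermediate generator features, depth: (H, W),
    K: (3, 3) camera intrinsics, cam_to_world: (4, 4) camera pose.
    """
    H, W, C = features.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))              # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1).astype(float)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)       # camera-space points
    world = cam_to_world[:3, :3] @ cam + cam_to_world[:3, 3:4]  # world-space points
    return world.T, features.reshape(-1, C)                     # (N, 3), (N, C)

def edit_object_pose(points, object_mask, T):
    """Apply a rigid transform T (4x4) only to the selected object's points."""
    edited = points.copy()
    obj = points[object_mask]
    homo = np.concatenate([obj, np.ones((len(obj), 1))], axis=1)
    edited[object_mask] = (T @ homo.T).T[:, :3]
    return edited

def reproject(points, feats, K, world_to_cam, H, W):
    """Splat the edited 3D feature points back onto one frame's image plane."""
    cam = world_to_cam[:3, :3] @ points.T + world_to_cam[:3, 3:4]
    pix = K @ cam
    uv = (pix[:2] / pix[2:3]).round().astype(int)
    canvas = np.zeros((H, W, feats.shape[1]))
    inside = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (cam[2] > 0)
    canvas[uv[1, inside], uv[0, inside]] = feats[inside]        # nearest-point splat
    return canvas
```

Because every frame reads from the same edited point set, any rigid edit applied once in `edit_object_pose` is consistent across all reprojected frames by construction.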