🤖 AI Summary
Existing video trajectory editing methods struggle to achieve both precise camera control and long-term temporal consistency, especially under large camera motions that require inpainting of previously unseen regions. This work proposes a hybrid warping strategy that explicitly separates static and dynamic content: static regions are rendered at the target camera pose from an incrementally updated world cache, while dynamic content is directly warped. A history-guided autoregressive diffusion model then jointly processes video clips to ensure coherence. The approach produces globally consistent, temporally coherent, high-quality edits, significantly outperforming existing methods on the newly introduced iPhone-PTZ benchmark. It supports diverse and complex camera trajectories and enables efficient synthesis with fewer parameters.
📝 Abstract
Video (camera) trajectory editing (VTE) aims to synthesize new videos that follow user-defined camera paths while preserving scene content and plausibly inpainting previously unseen regions, upgrading amateur footage into professionally styled videos. Existing VTE methods struggle with precise camera control and long-range consistency because they either inject target poses through a limited-capacity embedding or rely on single-frame warping with only implicit cross-frame aggregation in video diffusion models. To address these issues, we introduce a new VTE framework that 1) explicitly aggregates information across the entire source video via a hybrid warping scheme. Specifically, static regions are progressively fused into a world cache and then rendered to target camera poses, while dynamic regions are directly warped; their fusion yields globally consistent coarse frames that guide refinement. 2) processes video segments jointly with their history via a history-guided autoregressive diffusion model, while the world cache is incrementally updated to reinforce already inpainted content, enabling long-term temporal coherence. Finally, we present iPhone-PTZ, a new VTE benchmark with diverse camera motions and large trajectory variations, and achieve state-of-the-art performance with fewer parameters.
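The hybrid warping scheme above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the world cache is modeled as a colored point cloud, static pixels are lifted with known depth and camera poses and fused into the cache, the cache is z-buffer splatted into the target view, and already-warped dynamic pixels are composited on top. All names (`HybridWarper`, `backproject`, `render_cache`) and the pinhole/point-cloud representation are illustrative choices.

```python
import numpy as np

def backproject(depth, K, pose_c2w):
    """Lift a depth map to world-space 3D points, shape (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(np.float64)
    cam = (np.linalg.inv(K) @ rays.T).T * depth.reshape(-1, 1)   # camera frame
    return (pose_c2w[:3, :3] @ cam.T).T + pose_c2w[:3, 3]        # world frame

def render_cache(points, colors, K, pose_w2c, H, W):
    """Z-buffer splat of cached world points into the target view."""
    cam = (pose_w2c[:3, :3] @ points.T).T + pose_w2c[:3, 3]
    z = cam[:, 2]
    valid = z > 1e-6                                  # in front of the camera
    uv = (K @ cam[valid].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).round().astype(int)
    img = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for (x, y), c, d in zip(uv, colors[valid], z[valid]):
        if 0 <= x < W and 0 <= y < H and d < zbuf[y, x]:
            zbuf[y, x] = d                            # keep the nearest point
            img[y, x] = c
    return img, np.isfinite(zbuf)                     # rendered RGB + hit mask

class HybridWarper:
    """Toy world cache: a growing set of static world points with colors."""
    def __init__(self):
        self.points = np.empty((0, 3))
        self.colors = np.empty((0, 3))

    def update_cache(self, frame, depth, static_mask, K, pose_c2w):
        # Progressively fuse the static pixels of a source frame into the cache.
        pts = backproject(depth, K, pose_c2w)
        m = static_mask.reshape(-1)
        self.points = np.vstack([self.points, pts[m]])
        self.colors = np.vstack([self.colors, frame.reshape(-1, 3)[m]])

    def coarse_frame(self, dyn_rgb, dyn_mask, K, target_w2c, H, W):
        # Static content: rendered from the world cache at the target pose.
        static_rgb, hit = render_cache(self.points, self.colors, K, target_w2c, H, W)
        # Dynamic content (assumed already warped, e.g. by flow) is composited on top.
        out = static_rgb.copy()
        out[dyn_mask] = dyn_rgb[dyn_mask]
        # Pixels covered by neither source are holes, left for diffusion inpainting.
        return out, hit | dyn_mask
```

With an all-static scene and an unchanged camera, the coarse frame reproduces the source frame exactly; under a new target pose the unhit pixels mark exactly the regions the diffusion model must inpaint.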