🤖 AI Summary
This work addresses the challenge of environment modeling from first-person walking videos, where dense crowds and eye-level camera perspectives fill frames with pedestrians that obstruct scene understanding. To tackle this, the authors propose a video inpainting approach built on semi-synthetic training data: diverse clip pairs are constructed by randomly compositing foreground pedestrians from real walking videos onto pedestrian-free backgrounds, augmented with photorealistic synthetic shadows. Using this data, they fine-tune the Casper video diffusion model to jointly remove both pedestrians and their shadows. The study presents the first application of video diffusion models to dynamic pedestrian removal in complex urban environments, demonstrating significant performance gains over the original Casper model, particularly in high-density crowd scenarios and visually cluttered backgrounds, and thereby enabling high-quality 3D/4D reconstruction of urban scenes.
📝 Abstract
Egocentric "walking tour" videos provide a rich source of image data for developing diverse visual models of environments around the world. However, the significant presence of humans in frames of these videos, due to crowds and eye-level camera perspectives, limits their usefulness in environment modeling applications. We address this challenge by developing a generative algorithm that can realistically remove (i.e., inpaint) humans and their associated shadow effects from walking tour videos. Key to our approach is the construction of a rich semi-synthetic dataset of video clip pairs for training this generative model. Each pair in the dataset consists of an environment-only background clip and a composite clip of walking humans with simulated shadows overlaid on the background. We randomly sourced both foreground and background components from real egocentric walking tour videos from around the world to maintain visual diversity. We then used this dataset to fine-tune the state-of-the-art Casper video diffusion model for object and effects inpainting, and demonstrate that the resulting model performs far better than Casper, both qualitatively and quantitatively, at removing humans from walking tour clips with significant human presence and complex backgrounds. Finally, we show that the resulting generated clips can be used to build successful 3D/4D models of urban locations.
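The paired-clip construction described in the abstract can be sketched as a per-frame alpha composite of a pedestrian layer and a shadow layer over a clean background. The function name and the uniform shadow-darkening model below are illustrative assumptions, not the authors' actual shadow-rendering pipeline, which the abstract describes only as producing simulated, photorealistic shadows:

```python
import numpy as np

def composite_frame(background, person_rgb, person_alpha,
                    shadow_mask, shadow_strength=0.5):
    """Overlay a pedestrian (with alpha matte) and a synthetic shadow
    onto a pedestrian-free background frame.

    background, person_rgb: float arrays in [0, 1], shape (H, W, 3)
    person_alpha, shadow_mask: float arrays in [0, 1], shape (H, W, 1)
    Returns the composite frame, shape (H, W, 3).
    """
    # Darken the background where the simulated shadow falls
    # (simple multiplicative shading as a stand-in for a real shadow model).
    shadowed = background * (1.0 - shadow_strength * shadow_mask)
    # Standard "over" alpha compositing of the pedestrian layer.
    return person_alpha * person_rgb + (1.0 - person_alpha) * shadowed
```

Applied per frame, this yields the (composite, background) clip pairs used as (input, target) supervision when fine-tuning the inpainting model.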