AI Summary
Existing static feed-forward scene reconstruction methods suffer from poor generalization and fail to model dynamic content effectively. To address this, we propose the first motion-aware feed-forward framework for dynamic scene reconstruction, enabling real-time bullet-time rendering and novel-view synthesis from monocular video input. Our approach employs a 3D Gaussian Splatting representation with a cross-frame spatiotemporal aggregation mechanism, jointly modeling static backgrounds and dynamic foregrounds without iterative optimization. The model processes monocular video end-to-end and reconstructs the entire scene within 150 ms, significantly faster than optimization-based methods. It achieves state-of-the-art performance on both static and dynamic benchmarks, delivering strong generalization, high-fidelity reconstruction, and millisecond-level inference latency.
Abstract
Recent advancements in static feed-forward scene reconstruction have demonstrated significant progress in high-quality novel view synthesis. However, these models often struggle with generalizability across diverse environments and fail to effectively handle dynamic content. We present BTimer (short for BulletTimer), the first motion-aware feed-forward model for real-time reconstruction and novel view synthesis of dynamic scenes. Our approach reconstructs the full scene in a 3D Gaussian Splatting representation at a given target ('bullet') timestamp by aggregating information from all the context frames. Such a formulation allows BTimer to gain scalability and generalization by leveraging both static and dynamic scene datasets. Given a casual monocular dynamic video, BTimer reconstructs a bullet-time scene within 150ms while reaching state-of-the-art performance on both static and dynamic scene datasets, even compared with optimization-based approaches.
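The core formulation above — a single feed-forward pass that aggregates all context frames into a 3D Gaussian scene at a chosen "bullet" timestamp — can be sketched as a minimal interface. This is an illustrative stand-in, not the paper's implementation: the function name, the dict-based frame format, and the temporal-proximity weighting are all hypothetical; the real BTimer model replaces the aggregation body with a learned network.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Gaussian3D:
    position: tuple  # (x, y, z) center of the Gaussian primitive
    scale: tuple     # per-axis extent
    opacity: float
    color: tuple     # RGB

def reconstruct_bullet_time(context_frames: List[dict],
                            bullet_t: float) -> List[Gaussian3D]:
    """Hypothetical feed-forward interface: one pass over all context
    frames yields a full 3D Gaussian scene at the bullet timestamp.
    Here a toy 'aggregation' weights each frame by temporal proximity
    to bullet_t; in the actual model this is a learned network."""
    gaussians = []
    for frame in context_frames:
        # frames closer in time to the bullet timestamp contribute more
        w = 1.0 / (1.0 + abs(frame["t"] - bullet_t))
        for (x, y, z) in frame["points"]:
            gaussians.append(Gaussian3D(
                position=(x, y, z),
                scale=(0.01, 0.01, 0.01),
                opacity=w,
                color=(0.5, 0.5, 0.5),
            ))
    return gaussians

# Single forward call — no per-scene optimization loop is involved.
scene = reconstruct_bullet_time(
    [{"t": 0.0, "points": [(0.0, 0.0, 1.0)]},
     {"t": 1.0, "points": [(0.1, 0.0, 1.0)]}],
    bullet_t=0.5,
)
print(len(scene))  # one Gaussian per lifted input point
```

The key design point the sketch mirrors is that the bullet timestamp is an input to the model rather than a fixed property of the reconstruction, so the same context frames can be re-queried at any moment in the captured interval.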