🤖 AI Summary
This work tackles a key bottleneck in 4D dynamic scene reconstruction: its heavy reliance on large-scale annotated data and costly model training. It proposes a fully learning-free approach that requires no training, fine-tuning, or auxiliary supervision. The core insight is that the transformer attention maps of DUSt3R, a pre-trained stereo 3D reconstruction model, implicitly encode motion cues, which this work is the first to identify and exploit. Through attention adaptation and motion-decoupled modeling, the method explicitly separates camera ego-motion from object-level dynamics. As a result, it handles three tasks simultaneously in an unsupervised manner: dynamic region segmentation, monocular camera pose estimation, and 4D dense point cloud reconstruction. Evaluated on real-world dynamic videos, it outperforms prior state-of-the-art methods that require training or fine-tuning, while offering better efficiency and generalization. The implementation is publicly available.
📝 Abstract
Recent advances in DUSt3R have enabled robust estimation of dense point clouds and camera parameters for static scenes, leveraging Transformer network architectures and direct supervision on large-scale 3D datasets. In contrast, the limited scale and diversity of available 4D datasets present a major bottleneck for training a highly generalizable 4D model. This constraint has driven conventional 4D methods to fine-tune 3D models on scalable dynamic video data with additional geometric priors such as optical flow and depth. In this work, we take an opposite path and introduce Easi3R, a simple yet efficient training-free method for 4D reconstruction. Our approach applies attention adaptation during inference, eliminating the need for from-scratch pre-training or network fine-tuning. We find that the attention layers in DUSt3R inherently encode rich information about camera and object motion. By carefully disentangling these attention maps, we achieve accurate dynamic region segmentation, camera pose estimation, and 4D dense point map reconstruction. Extensive experiments on real-world dynamic videos demonstrate that our lightweight attention adaptation significantly outperforms previous state-of-the-art methods that are trained or fine-tuned on extensive dynamic datasets. Our code is publicly available for research purposes at https://easi3r.github.io/
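To make the attention-disentangling idea concrete, the toy sketch below shows one way aggregated attention maps could be thresholded into a dynamic-region mask. This is a simplified illustration, not the paper's actual pipeline: the function name, the aggregation over layers, and the fixed threshold are all assumptions made for demonstration; Easi3R's real adaptation operates on DUSt3R's decoder attention with more careful normalization.

```python
import numpy as np

def dynamic_mask_from_attention(attn_maps, threshold=0.5):
    """Aggregate per-layer spatial attention into a crude dynamic-region mask.

    attn_maps: array of shape (L, H, W) -- hypothetical attention magnitudes
    for one frame, one map per transformer layer. The sketch follows the
    intuition that attention is weaker on moving objects, so low aggregated
    attention marks a candidate dynamic region.
    """
    agg = attn_maps.mean(axis=0)                              # fuse layers
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)  # normalize to [0, 1]
    return agg < threshold                                    # True = dynamic candidate
```

In the actual method, such a mask would then let pose estimation discount dynamic pixels, decoupling camera ego-motion from object motion.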