🤖 AI Summary
Existing self-supervised methods for scene flow estimation on unlabeled point cloud sequences struggle to simultaneously model point cloud irregularity, capture long-range motion dependencies, and preserve geometric detail. To address this, we propose a point-voxel dual-branch fusion architecture: the point branch incorporates an Umbrella Surface Feature Extraction (USFE) module to explicitly encode local geometric structure, while the voxel branch employs sparse voxel grid attention with a shifted-window strategy to strengthen large-displacement modeling and leverages point convolution for cross-modal feature alignment. Jointly optimized via self-supervised photometric and geometric consistency constraints, our method achieves state-of-the-art performance on the FlyingThings3D and KITTI benchmarks, reducing end-point error (EPE) by 10.52% on KITTI-s and 8.51% on KITTI-o over prior self-supervised approaches. To our knowledge, this is the first work to jointly achieve high accuracy, computational efficiency, and fine-grained detail preservation in unsupervised scene flow estimation.
📝 Abstract
Scene flow estimation aims to recover the 3D motion field of points between two consecutive point cloud frames and has wide applications in various fields. Existing point-based methods ignore the irregularity of point clouds and have difficulty capturing long-range dependencies due to the inefficiency of point-level computation, while voxel-based methods suffer from the loss of detail. In this paper, we propose a point-voxel fusion method: a voxel branch based on sparse grid attention and a shifted-window strategy captures long-range dependencies, while a point branch captures fine-grained features to compensate for the information lost in the voxel branch. In addition, since raw xyz coordinates are insufficient to describe the geometric structure of complex 3D objects in the scene, we explicitly encode the local surface information of the point cloud through an umbrella surface feature extraction (USFE) module. We verify the effectiveness of our method through experiments on the FlyingThings3D and KITTI datasets. Our method outperforms all other self-supervised methods and achieves highly competitive results compared to fully supervised methods, improving on all metrics; in particular, EPE is reduced by 8.51% on KITTI-o and 10.52% on KITTI-s.
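To make the dual-branch idea concrete, here is a minimal NumPy sketch, not the paper's actual implementation: the point branch keeps per-point features (fine detail), the voxel branch aggregates features within each voxel cell (coarse, longer-range context), and the two are fused per point by concatenation. All function names and the mean-pooling voxel aggregation are illustrative assumptions; the paper's voxel branch uses sparse grid attention with shifted windows rather than simple averaging.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Assign each point to a voxel cell by flooring its coordinates."""
    return np.floor(points / voxel_size).astype(np.int64)

def point_voxel_fusion(points, feats, voxel_size=1.0):
    """Toy dual-branch fusion (illustrative only):
    - point branch: per-point features, kept as-is;
    - voxel branch: mean feature over each occupied voxel (stand-in for
      the paper's sparse grid attention);
    - fusion: concatenate each point's feature with its voxel's context.
    points: (N, 3) array; feats: (N, C) array; returns (N, 2C)."""
    idx = voxelize(points, voxel_size)
    # Map each distinct voxel cell to a contiguous integer id.
    key_to_id = {k: i for i, k in enumerate(sorted({tuple(v) for v in idx}))}
    ids = np.array([key_to_id[tuple(v)] for v in idx])
    n_voxels = len(key_to_id)
    # Scatter-mean: sum features per voxel, then divide by point counts.
    sums = np.zeros((n_voxels, feats.shape[1]))
    counts = np.zeros(n_voxels)
    np.add.at(sums, ids, feats)
    np.add.at(counts, ids, 1.0)
    voxel_feats = sums / counts[:, None]
    # Broadcast each voxel's pooled feature back to its member points.
    return np.concatenate([feats, voxel_feats[ids]], axis=1)
```

Points that fall in the same cell share the voxel half of the fused feature, which is what lets the voxel branch propagate context over larger displacements, while the point half preserves per-point detail.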