Rig3R: Rig-Aware Conditioning for Learned 3D Reconstruction

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for joint 3D reconstruction, camera pose estimation, and rig-structure discovery from multi-camera rigid mounts neglect rig geometry by treating images as unordered sets. Method: Rig3R develops a rig-aware latent space that enables both explicit geometric conditioning when rig metadata (camera ID, timestamp, rig pose) is available and implicit rig-structure inference when it is not. The end-to-end multi-view framework jointly decodes pointmaps and two types of raymaps: a pose raymap relative to a global frame and a rig raymap relative to a rig-centric frame that is consistent across time. Results: On real-world rig datasets, the method achieves state-of-the-art performance across all three tasks: 3D reconstruction, pose estimation, and rig discovery. It improves mean Average Accuracy (mAA) by 17–45% over prior work in a single forward pass, with no post-processing or iterative refinement.

📝 Abstract
Estimating agent pose and 3D scene structure from multi-camera rigs is a central task in embodied AI applications such as autonomous driving. Recent learned approaches such as DUSt3R have shown impressive performance in multiview settings. However, these models treat images as unstructured collections, limiting effectiveness in scenarios where frames are captured from synchronized rigs with known or inferable structure. To this end, we introduce Rig3R, a generalization of prior multiview reconstruction models that incorporates rig structure when available, and learns to infer it when not. Rig3R conditions on optional rig metadata including camera ID, time, and rig poses to develop a rig-aware latent space that remains robust to missing information. It jointly predicts pointmaps and two types of raymaps: a pose raymap relative to a global frame, and a rig raymap relative to a rig-centric frame consistent across time. Rig raymaps allow the model to infer rig structure directly from input images when metadata is missing. Rig3R achieves state-of-the-art performance in 3D reconstruction, camera pose estimation, and rig discovery, outperforming both traditional and learned methods by 17-45% mAA across diverse real-world rig datasets, all in a single forward pass without post-processing or iterative refinement.
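The abstract's two raymap types can be made concrete with a little camera geometry. Rig3R's exact raymap parameterization is not given in this summary, so the following is only a minimal NumPy sketch of the underlying idea: a raymap assigns each pixel a ray (origin plus unit direction); a pose raymap expresses those rays in a global frame, and a rig raymap re-expresses them relative to a rig-centric frame. The function names and the 4x4 rig-pose convention here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def pixel_raymap(K, R, t, H, W):
    """Per-pixel raymap (origin + unit direction) in the world frame.

    K: 3x3 intrinsics; R, t: camera-to-world rotation and camera center,
    so the returned rays play the role of a global-frame "pose raymap".
    """
    # Homogeneous pixel grid, shape 3 x (H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project through the intrinsics, rotate into the world frame.
    dirs = R @ (np.linalg.inv(K) @ pix)
    dirs = (dirs / np.linalg.norm(dirs, axis=0)).T.reshape(H, W, 3)
    origins = np.broadcast_to(t.reshape(1, 1, 3), (H, W, 3)).copy()
    return origins, dirs

def rig_raymap(origins, dirs, T_world_rig):
    """Re-express a world-frame raymap in a rig-centric frame.

    T_world_rig: 4x4 rig-to-world pose; applying its inverse to the rays
    yields rig-relative rays that stay consistent as the rig moves.
    """
    R_wr, t_wr = T_world_rig[:3, :3], T_world_rig[:3, 3]
    # Row-vector form of R_wr^T (x - t_wr).
    o_rig = (origins - t_wr) @ R_wr
    d_rig = dirs @ R_wr
    return o_rig, d_rig
```

With the identity rig pose the two raymaps coincide; as the rig translates or rotates, only the global-frame rays change, which is what lets a rig-relative raymap encode the (time-invariant) mounting geometry of the cameras.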
Problem

Research questions and friction points this paper is trying to address.

Estimating agent pose and 3D scene structure from multi-camera rigs
Incorporating rig structure into learned 3D reconstruction models
Inferring rig structure from images when metadata is missing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates rig metadata for reconstruction
Predicts pointmaps and two raymap types
Infers rig structure from images directly