Statistical Confidence Rescoring for Robust 3D Scene Graph Generation from Multi-View Images

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of robust 3D semantic scene graph generation from multi-view RGB images without 3D ground-truth annotations, this paper proposes an end-to-end framework. First, geometric priors are obtained via multi-view depth estimation and pseudo-point cloud reconstruction. Second, semantic masks guide cross-view feature aggregation to suppress background noise. Third, topological relationships among nodes and edges within one-hop neighborhoods are explicitly modeled, and a statistical-prior-based confidence rescaling mechanism jointly optimizes object, predicate, and relation predictions. Crucially, the method operates entirely without 3D supervision. Experiments demonstrate significant improvements in both accuracy and structural stability of 3D scene graphs, outperforming state-of-the-art unsupervised and weakly supervised approaches on mainstream benchmarks.
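The paper does not publish implementation details for its confidence rescaling step, but the idea described — using co-occurrence statistics from the training set to adjust a node's class confidence based on predicates in its one-hop neighborhood — can be sketched roughly as follows. The function name, the blending parameter `alpha`, and the exact blend rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rescale_node_confidence(node_probs, neighbor_edge_probs, cooccur_prior, alpha=0.5):
    """Hypothetical sketch of statistical-prior confidence rescaling.

    node_probs:          (C,)   predicted object-class distribution for one node
    neighbor_edge_probs: (E, P) predicate distributions on the node's one-hop edges
    cooccur_prior:       (P, C) P(object class | predicate), estimated from
                                training summary statistics
    alpha:               blend weight between the raw prediction and the
                                prior-rescaled score (assumed, not from the paper)
    """
    # Expected prior over object classes implied by the surrounding predicates
    prior = neighbor_edge_probs @ cooccur_prior   # (E, C)
    prior = prior.mean(axis=0)                    # average over incident edges

    # Blend the raw prediction with a prior-weighted copy, then renormalize
    rescaled = (1 - alpha) * node_probs + alpha * node_probs * prior
    return rescaled / rescaled.sum()
```

For instance, a node weakly predicted as "table" would gain confidence if its incident edges strongly predict "standing on", a predicate that co-occurs with "table" in the training statistics.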

📝 Abstract
Modern 3D semantic scene graph estimation methods rely on ground-truth 3D annotations to accurately predict target objects, predicates, and relationships. In the absence of such 3D ground truth, we explore leveraging only multi-view RGB images to tackle this task. To attain robust features for accurate scene graph estimation, we must overcome the noisy pseudo-point-cloud geometry reconstructed from predicted depth maps and reduce the background noise present in multi-view image features. The key is to enrich node and edge features with accurate semantic and spatial information and with cues from neighboring relations. We obtain semantic masks to guide feature aggregation and filter out background features, and we design a novel method that incorporates neighboring-node information to improve the robustness of our scene graph estimates. Furthermore, we leverage explicit statistical priors, calculated from training summary statistics, to refine node and edge predictions based on their one-hop neighborhoods. Our experiments show that our method outperforms current methods that use only multi-view images as input. Our project page is available at https://qixun1.github.io/projects/SCRSSG.
Problem

Research questions and friction points this paper is trying to address.

Generate 3D scene graphs without 3D ground truth annotations
Overcome noisy geometry from predicted depth maps
Reduce background noise in multi-view image features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses only multi-view RGB images as input
Semantic masks guide cross-view feature aggregation
Statistical priors refine node and edge predictions
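The mask-guided aggregation above can be illustrated with a minimal sketch: pool per-view feature maps over the object's semantic masks so that background pixels contribute nothing to the object feature. The function name and tensor shapes are assumptions for illustration; the paper's actual aggregation may differ.

```python
import numpy as np

def mask_guided_aggregate(view_features, masks, eps=1e-8):
    """Hypothetical mask-guided multi-view feature pooling for one object.

    view_features: (V, H, W, D) per-view feature maps
    masks:         (V, H, W)    binary semantic masks (1 = object foreground)
    Returns a (D,) feature: the mask-weighted mean over all views,
    so background pixels are suppressed entirely.
    """
    w = masks[..., None]                              # (V, H, W, 1)
    summed = (view_features * w).sum(axis=(0, 1, 2))  # (D,)
    count = w.sum()                                   # total foreground pixels
    return summed / (count + eps)
```

Because the pooling weight is zero outside the mask, features from cluttered backgrounds cannot contaminate the aggregated object descriptor, which is the noise-suppression effect the summary describes.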