Object-Centric Representation Learning for Enhanced 3D Scene Graph Prediction

📅 2025-10-06
🤖 AI Summary
Current 3D semantic scene graph prediction methods rely on graph neural networks but suffer from insufficient discriminability and representational capacity in object and relational feature encoding. To address this, we propose a decoupled representation learning framework: first, a highly discriminative object feature encoder is designed, integrating geometric-semantic multimodal fusion; second, an object-centric contrastive pre-training strategy is introduced to explicitly decouple object representation learning from graph structure prediction. Crucially, our method requires no architectural modifications to downstream graph inference modules and can be seamlessly integrated as a plug-in enhancement. Evaluated on the 3DSSG benchmark, our approach significantly outperforms state-of-the-art methods, achieving consistent improvements in both object classification and relationship prediction—the two core evaluation metrics—thereby validating the effectiveness of decoupled representation learning for 3D scene semantic understanding.

📝 Abstract
3D Semantic Scene Graph Prediction aims to detect objects and their semantic relationships in 3D scenes, and has emerged as a crucial technology for robotics and AR/VR applications. While previous research has addressed dataset limitations and explored various approaches including Open-Vocabulary settings, they frequently fail to optimize the representational capacity of object and relationship features, showing excessive reliance on Graph Neural Networks despite insufficient discriminative capability. In this work, we demonstrate through extensive analysis that the quality of object features plays a critical role in determining overall scene graph accuracy. To address this challenge, we design a highly discriminative object feature encoder and employ a contrastive pretraining strategy that decouples object representation learning from the scene graph prediction. This design not only enhances object classification accuracy but also yields direct improvements in relationship prediction. Notably, when plugging in our pretrained encoder into existing frameworks, we observe substantial performance improvements across all evaluation metrics. Additionally, whereas existing approaches have not fully exploited the integration of relationship information, we effectively combine both geometric and semantic features to achieve superior relationship prediction. Comprehensive experiments on the 3DSSG dataset demonstrate that our approach significantly outperforms previous state-of-the-art methods. Our code is publicly available at https://github.com/VisualScienceLab-KHU/OCRL-3DSSG-Codes.
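The abstract mentions combining geometric and semantic object features for relationship prediction. The paper's actual encoder is not detailed on this page, so the following is only a minimal sketch of what geometric-semantic fusion can look like; every function name and feature choice here is an illustrative assumption, not the authors' design:

```python
import numpy as np

def geometric_descriptor(points):
    """Hand-crafted geometric features for one object's point cloud (N, 3):
    centroid, axis-aligned bounding-box extent, and per-axis spread.
    A learned point encoder would replace this in practice."""
    centroid = points.mean(axis=0)
    extent = points.max(axis=0) - points.min(axis=0)
    spread = points.std(axis=0)
    return np.concatenate([centroid, extent, spread])  # shape (9,)

def fuse(geom_feat, sem_feat):
    """Early fusion by concatenating the two modalities after
    L2-normalizing each, so neither dominates by raw scale."""
    g = geom_feat / (np.linalg.norm(geom_feat) + 1e-8)
    s = sem_feat / (np.linalg.norm(sem_feat) + 1e-8)
    return np.concatenate([g, s])
```

The normalization step is one common way to balance modalities of different magnitudes before a downstream predictor consumes the fused vector.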
Problem

Research questions and friction points this paper is trying to address.

Enhancing object feature discriminative capability for 3D scene graphs
Decoupling object representation learning from relationship prediction
Integrating geometric and semantic features for relationship prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object feature encoder with enhanced discriminative capability
Contrastive pretraining strategy decouples representation learning
Integration of geometric and semantic features for relationships
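The contrastive pretraining idea in the bullets above can be sketched with a standard InfoNCE objective over object embeddings: two views of the same object form a positive pair, and all other objects in the batch act as negatives. This is a generic illustration of the technique, not the paper's exact loss; the temperature value and pairing scheme are assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """InfoNCE loss over L2-normalized object embeddings.

    anchors, positives: (N, D) arrays where row i of `positives` is an
    augmented view of row i of `anchors`. Every other row in the batch
    serves as a negative, so embeddings of different object instances
    are pushed apart while views of the same object are pulled together.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Matching pairs sit on the diagonal; minimize their negative log-likelihood.
    return float(-np.mean(np.diag(log_probs)))
```

Because this objective depends only on object embeddings, the encoder can be pretrained in isolation and then plugged into an unmodified downstream graph-inference module, which is the decoupling the summary describes.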
Authors

KunHo Heo, Kyung Hee University
GiHyun Kim, Kyung Hee University
SuYeon Kim, Kyung Hee University
MyeongAh Cho, Assistant Professor, Kyung Hee University
Research interests: Computer Vision, Video Processing, Deep Learning, Pattern Recognition