With Great Context Comes Great Prediction Power: Classifying Objects via Geo-Semantic Scene Graphs

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human object recognition leverages rich scene context, including spatial relationships, material properties, and object co-occurrence, whereas mainstream methods operate on isolated image regions. This work introduces the Geo-Semantic Contextual Graph (GSCG), the first framework to jointly integrate metric depth estimation with unified panoptic and material segmentation, explicitly modeling object geometry, material, color, and spatial relations. A context-aware graph neural network is then built upon this representation for interpretable, context-driven classification. Key contributions are: (1) a structured, interpretable Geo-Semantic Contextual Graph representation for holistic scene understanding; and (2) a paradigm shift from isolated-region recognition to contextualized, relational inference. Evaluated on COCO 2017, GSCG achieves 73.4% classification accuracy, surpassing context-agnostic baselines (+35.0 points), fine-tuned ResNet-50 (+19.9 points), and the multimodal LLM Llama 4 Scout (+31.1 points).

📝 Abstract
Humans effortlessly identify objects by leveraging a rich understanding of the surrounding scene, including spatial relationships, material properties, and the co-occurrence of other objects. In contrast, most computational object recognition systems operate on isolated image regions that carry little meaning on their own, thus ignoring this vital contextual information. This paper argues for the critical role of context and introduces a novel framework for contextual object classification. We first construct a Geo-Semantic Contextual Graph (GSCG) from a single monocular image. This rich, structured representation is built by integrating a metric depth estimator with a unified panoptic and material segmentation model. The GSCG encodes objects as nodes with detailed geometric, chromatic, and material attributes, and their spatial relationships as edges. This explicit graph structure makes the model's reasoning process inherently interpretable. We then propose a specialized graph-based classifier that aggregates features from a target object, its immediate neighbors, and the global scene context to predict its class. Through extensive ablation studies, we demonstrate that our context-aware model achieves a classification accuracy of 73.4%, dramatically outperforming context-agnostic versions (as low as 38.4%). Furthermore, our GSCG-based approach significantly surpasses strong baselines, including fine-tuned ResNet models (at most 53.5%) and a state-of-the-art multimodal Large Language Model (LLM), Llama 4 Scout, which, even when given the full image alongside a detailed description of objects, reaches at most 42.3%. These results on the COCO 2017 train/val splits highlight the superiority of explicitly structured and interpretable context for object recognition tasks.
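To make the representation concrete, here is a minimal Python sketch of what a GSCG data structure could look like: objects as nodes carrying depth, geometric, chromatic, and material attributes, and spatial relations as edges. All field names, relation labels, and the structure itself are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a Geo-Semantic Contextual Graph (GSCG).
# Attribute names and the API are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    node_id: int
    depth_m: float       # metric depth from the monocular depth estimator
    extent_m: tuple      # (w, h, d) rough 3D extent from depth + mask
    mean_color: tuple    # (R, G, B) chromatic attribute of the region
    material: str        # label from the material segmentation model

@dataclass
class SpatialEdge:
    src: int
    dst: int
    relation: str        # e.g. "left_of", "behind", "on_top_of"
    distance_m: float    # metric distance between node centroids

@dataclass
class GSCG:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: ObjectNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, edge: SpatialEdge) -> None:
        self.edges.append(edge)

    def neighbors(self, node_id: int) -> list:
        # Immediate neighbors of a target node, used for local context.
        return [e.dst for e in self.edges if e.src == node_id]
```

Because every node attribute and edge relation is an explicit, human-readable value, a classification can be traced back to the concrete context that produced it, which is the interpretability property the paper emphasizes.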
Problem

Research questions and friction points this paper is trying to address.

Classifying objects using geo-semantic scene graphs from images
Integrating depth, panoptic segmentation, and material data for context
Improving accuracy over context-agnostic and baseline models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs Geo-Semantic Contextual Graph from monocular images
Integrates depth estimation with panoptic and material segmentation
Uses graph classifier aggregating local and global context features
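The aggregation step in the last bullet can be sketched as follows: the target object's feature vector is combined with the mean-pooled features of its graph neighbors and a global scene vector, and the result is fed to a classifier. The function names and the single linear layer standing in for the graph-based classifier are assumptions for illustration, not the paper's architecture.

```python
# Illustrative sketch of context aggregation over a GSCG node:
# target features + mean-pooled neighbor features + global scene features.
# Names and the linear "classifier" are hypothetical placeholders.
import numpy as np

def aggregate_context(target, neighbor_feats, scene):
    """Concatenate target, mean-pooled neighbor, and global scene features."""
    if neighbor_feats:
        neighbor = np.mean(neighbor_feats, axis=0)
    else:
        # An isolated object has no local context; use a zero vector.
        neighbor = np.zeros_like(target)
    return np.concatenate([target, neighbor, scene])

def classify(context_vec, weights, bias):
    """A single linear layer stands in for the graph-based classifier."""
    logits = weights @ context_vec + bias
    return int(np.argmax(logits))
```

The zero-vector fallback makes the context term explicit: when it is removed (or zeroed for all objects), the model degenerates to the context-agnostic setting the paper ablates against.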