AI Summary
Current multimodal large language models (MLLMs) lack fine-grained, structured alignment between pixel-level visual features and textual semantics, limiting their scene understanding and interactive capabilities in embodied settings. To address this, we propose SGClip, the first open-domain, annotation-free video scene graph generation model. Built upon the CLIP architecture and a neurosymbolic learning framework, SGClip employs self-supervised training on video-caption pairs to achieve spatiotemporal structured perception and semantic alignment. It supports prompt-driven reasoning and downstream task fine-tuning. SGClip achieves state-of-the-art performance on multiple scene graph generation and action localization benchmarks. Experiments demonstrate that it significantly reduces perceptual errors, enabling open-source MLLMs to surpass closed-source baselines in two embodied environments.
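To make the "prompt-driven" idea concrete, the sketch below approximates open-vocabulary relation scoring with a stock CLIP checkpoint from Hugging Face: candidate predicates are turned into text prompts and ranked against a video frame. This is an illustrative assumption, not SGClip's actual architecture, weights, or API (the model name, prompt template, and file path are all placeholders).

```python
# Illustrative sketch only: prompt-driven relation scoring with a generic CLIP model.
# SGClip's real interface is NOT reproduced here; all names below are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def score_relations(frame: Image.Image, subj: str, obj: str, relations: list[str]) -> dict:
    """Rank candidate (subj, relation, obj) predicates for one video frame."""
    prompts = [f"a photo of a {subj} {rel} a {obj}" for rel in relations]
    inputs = processor(text=prompts, images=frame, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image.squeeze(0)  # one score per prompt
    return dict(zip(relations, logits.softmax(dim=-1).tolist()))

# Example: rank open-vocabulary predicates for a detected (person, cup) pair.
frame = Image.open("frame_000123.jpg")  # hypothetical frame path
print(score_relations(frame, "person", "cup", ["holding", "drinking from", "looking at"]))
```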
Abstract
Multi-modal large language models (MLLMs) are making rapid progress toward general-purpose embodied agents. However, current training pipelines primarily rely on high-level vision-sound-text pairs and lack fine-grained, structured alignment between pixel-level visual content and textual semantics. To overcome this challenge, we propose ESCA, a new framework for contextualizing embodied agents through structured spatial-temporal understanding. At its core is SGClip, a novel CLIP-based, open-domain, and promptable model for generating scene graphs. SGClip is trained on 87K+ open-domain videos via a neurosymbolic learning pipeline, which harnesses model-driven self-supervision from video-caption pairs and structured reasoning, thereby eliminating the need for human-labeled scene graph annotations. We demonstrate that SGClip supports both prompt-based inference and task-specific fine-tuning, excelling in scene graph generation and action localization benchmarks. ESCA with SGClip consistently improves both open-source and commercial MLLMs, achieving state-of-the-art performance across two embodied environments. Notably, it significantly reduces agent perception errors and enables open-source models to surpass proprietary baselines.
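The snippet below is a minimal sketch of the contextualization idea: a per-frame scene graph is serialized into text and prepended to the embodied agent's prompt. The triplet format, prompt wording, and agent interface are illustrative assumptions, not ESCA's exact protocol.

```python
# Minimal sketch (assumed format): feed a generated scene graph to an MLLM as context.
from dataclasses import dataclass

@dataclass
class Triplet:
    subject: str
    relation: str
    object: str

def scene_graph_to_context(triplets: list[Triplet], frame_idx: int) -> str:
    # Serialize structured perception into a compact textual context block.
    facts = "; ".join(f"{t.subject} {t.relation} {t.object}" for t in triplets)
    return f"[Frame {frame_idx} scene graph] {facts}."

def build_agent_prompt(task: str, context: str) -> str:
    # Prepend structured context so the MLLM plans against grounded percepts.
    return f"{context}\nTask: {task}\nPlan the next action."

graph = [Triplet("robot gripper", "is near", "mug"), Triplet("mug", "is on", "table")]
print(build_agent_prompt("pick up the mug", scene_graph_to_context(graph, 42)))
```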