Feeling the Space: Egomotion-Aware Video Representation for Efficient and Accurate 3D Scene Understanding

📅 2026-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing approaches to 3D scene understanding either rely on computationally expensive 3D representations or lack physical scale awareness, making it challenging to balance efficiency and accuracy. This work proposes Motion-MLLM, a lightweight framework that fuses video and IMU data to achieve absolute-scale-aware 3D understanding. The method introduces a cascaded IMU-visual keyframe selection mechanism and an asymmetric cross-modal attention fusion strategy mediated by motion tokens. Evaluated across multiple 3D scene understanding and spatial reasoning tasks, Motion-MLLM matches or surpasses state-of-the-art performance at substantially lower computational cost, achieving 1.40× and 1.63× higher cost-effectiveness than video-frame-based and explicit-3D-data-based baselines, respectively.
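The summary describes the selection step only at a high level. As a concrete illustration, the sketch below shows one way a cascaded IMU-visual keyframe filter could be wired up: an IMU motion-energy pre-filter followed by a visual-redundancy check. All names, thresholds, and tensor shapes (`select_keyframes`, `motion_quantile`, `sim_threshold`) are assumptions for illustration, not the paper's released code.

```python
# Hypothetical sketch of a cascaded motion-visual keyframe filter.
# Stage 1 keeps frames with high IMU motion energy; stage 2 drops frames
# whose visual features are near-duplicates of already-kept ones.
# All names and thresholds are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F


def select_keyframes(frame_feats: torch.Tensor,   # (T, D) per-frame visual features
                     imu: torch.Tensor,           # (T, 6) gyro + accel readings per frame
                     motion_quantile: float = 0.5,
                     sim_threshold: float = 0.9) -> list[int]:
    T = frame_feats.shape[0]

    # Stage 1: IMU-based pre-filter -- keep frames whose motion energy
    # exceeds the given quantile (the camera is actually moving or turning).
    motion_energy = imu.norm(dim=-1)                        # (T,)
    cutoff = torch.quantile(motion_energy, motion_quantile)
    candidates = [t for t in range(T) if motion_energy[t] >= cutoff]
    if not candidates:                                       # static clip: fall back to all frames
        candidates = list(range(T))

    # Stage 2: visual redundancy filter -- greedily keep a candidate only if
    # its feature is not too similar to any frame already selected.
    feats = F.normalize(frame_feats, dim=-1)
    keep: list[int] = []
    for t in candidates:
        if all(float(feats[t] @ feats[k]) < sim_threshold for k in keep):
            keep.append(t)
    return keep


if __name__ == "__main__":
    feats = torch.randn(32, 768)      # dummy visual features
    imu = torch.randn(32, 6)          # dummy per-frame IMU readings
    print(select_keyframes(feats, imu))
```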

📝 Abstract
Recent Multimodal Large Language Models (MLLMs) have shown high potential for spatial reasoning within 3D scenes. However, they typically rely on computationally expensive 3D representations like point clouds or reconstructed Bird's-Eye View (BEV) maps, or lack physical grounding to resolve ambiguities in scale and size. This paper significantly enhances MLLMs with egomotion modality data, captured by Inertial Measurement Units (IMUs) concurrently with the video. In particular, we propose a novel framework, called Motion-MLLM, introducing two key components: (1) a cascaded motion-visual keyframe filtering module that leverages both IMU data and visual features to efficiently select a sparse yet representative set of keyframes, and (2) an asymmetric cross-modal fusion module where motion tokens serve as intermediaries that channel egomotion cues and cross-frame visual context into the visual representation. By grounding visual content in physical egomotion trajectories, Motion-MLLM can reason about absolute scale and spatial relationships across the scene. Our extensive evaluation shows that Motion-MLLM makes significant improvements in various tasks related to 3D scene understanding and spatial reasoning. Compared to state-of-the-art (SOTA) methods based on video frames and explicit 3D data, Motion-MLLM exhibits similar or even higher accuracy with significantly less overhead (i.e., 1.40$\times$ and 1.63$\times$ higher cost-effectiveness, respectively).
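To make the second component more tangible, here is a minimal sketch of an asymmetric fusion block in which a few learned motion tokens first gather egomotion and cross-frame visual context, and only the visual tokens are then updated from them before being passed to the LLM. The module name `MotionMediatedFusion`, the 6-dimensional IMU input, and all hyperparameters are illustrative assumptions; the paper's actual architecture may differ.

```python
# Minimal sketch of an asymmetric cross-modal fusion block in which a small
# set of motion tokens mediates between egomotion cues and per-frame visual
# tokens. Module and parameter names are illustrative, not the paper's code.
import torch
import torch.nn as nn


class MotionMediatedFusion(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8, num_motion_tokens: int = 4):
        super().__init__()
        self.motion_tokens = nn.Parameter(torch.randn(num_motion_tokens, dim) * 0.02)
        self.imu_proj = nn.Linear(6, dim)   # lift raw IMU features to token dimension
        # Motion tokens gather egomotion and cross-frame visual context...
        self.gather = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ...and only the visual tokens are updated from them (asymmetric direction).
        self.inject = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, vis: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        # vis: (B, T*N, dim) visual tokens from the selected keyframes
        # imu: (B, T, 6) egomotion features aligned with those keyframes
        B = vis.shape[0]
        motion = self.motion_tokens.unsqueeze(0).repeat(B, 1, 1)
        context = torch.cat([self.imu_proj(imu), vis], dim=1)

        # Step 1: motion tokens attend over IMU + visual tokens (queries = motion).
        motion, _ = self.gather(motion, context, context)

        # Step 2: visual tokens attend to the enriched motion tokens only;
        # the motion tokens themselves are not fed onward.
        delta, _ = self.inject(vis, motion, motion)
        return self.norm_v(vis + delta)


if __name__ == "__main__":
    block = MotionMediatedFusion()
    vis = torch.randn(2, 8 * 16, 768)   # 8 keyframes x 16 patch tokens each
    imu = torch.randn(2, 8, 6)
    print(block(vis, imu).shape)        # torch.Size([2, 128, 768])
```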

Problem

Research questions and friction points this paper is trying to address.

3D scene understanding
spatial reasoning
multimodal large language models
egomotion
scale ambiguity

Innovation

Methods, ideas, or system contributions that make the work stand out.

egomotion-aware
motion-visual fusion
keyframe filtering
3D scene understanding
multimodal LLM