On the Generalization Capacities of MLLMs for Spatial Intelligence

📅 2026-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited cross-camera generalization of current multimodal large language models (MLLMs) that rely solely on RGB inputs, as their neglect of camera parameters leads to overfitting to the training camera distribution and poor performance on spatial tasks. To overcome this, the authors propose a camera-aware MLLM framework that explicitly embeds camera intrinsic parameters into visual tokens for the first time. The approach further incorporates camera-aware data augmentation and a geometry-aware distillation mechanism leveraging a 3D vision foundation model, effectively decoupling scene content from camera viewpoint. Experiments demonstrate that the proposed method significantly outperforms existing baselines in spatial localization and navigation under cross-camera settings, highlighting the critical role of explicitly modeling camera information in enhancing the spatial reasoning and generalization capabilities of MLLMs.
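The core mechanism described above — embedding camera intrinsics into every visual token — can be sketched as follows. This is a minimal NumPy illustration under assumptions, not the paper's implementation: the 4-dimensional intrinsics feature (focal lengths and principal point, normalized by image size), the linear projection, and the additive conditioning are all placeholder choices; in the actual model the projection would be learned and the token dimension far larger.

```python
import numpy as np

def intrinsics_embedding(fx, fy, cx, cy, width, height, rng, dim=8):
    """Normalize camera intrinsics by image size and project them to the
    visual-token dimension. The random weights stand in for a learned
    linear layer (hypothetical, for illustration only)."""
    feats = np.array([fx / width, fy / height, cx / width, cy / height])
    W = rng.standard_normal((dim, 4)) * 0.02  # placeholder learned projection
    b = np.zeros(dim)
    return W @ feats + b

def condition_tokens(tokens, cam_embed):
    """Dense conditioning: add the camera embedding to every visual token."""
    return tokens + cam_embed[None, :]

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))  # 16 visual tokens of dimension 8
emb = intrinsics_embedding(500.0, 500.0, 320.0, 240.0, 640, 480, rng)
out = condition_tokens(tokens, emb)
```

Because the same embedding is added to each token, the model sees the camera's imaging geometry alongside every patch of scene content, which is what lets it separate "how large the object appears" from "how long the lens is".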

📝 Abstract
Multimodal Large Language Models (MLLMs) that directly process RGB inputs for tasks like 3D localization and navigation have shown remarkable potential. However, we argue that these RGB-only approaches are fundamentally flawed in their ability to generalize across cameras. By ignoring camera parameters, they entangle an object's physical properties with the camera's perspective, creating an irresolvable ambiguity. We show this leads MLLMs to overfit to the training camera distribution, rather than learning true and generalizable 3D geometric principles. To address this, we propose a Camera-Aware framework for spatial MLLMs. It learns generalizable spatial reasoning by: (i) injecting camera intrinsics via a dense embedding that conditions each visual token; (ii) introducing a camera-aware data augmentation strategy that synthetically varies camera parameters, forcing the model to disentangle camera properties from scene content; and (iii) distilling geometric priors from a 3D vision foundation model. Extensive experiments demonstrate that camera-aware MLLMs substantially outperform their naive counterparts, particularly in cross-camera generalization tests on spatially-grounded tasks, indicating that camera-awareness is not only beneficial but also a prerequisite for robust and generalizable spatial intelligence in MLLMs.
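Point (ii), camera-aware augmentation, can be illustrated with the standard zoom-crop trick: center-cropping by 1/s and resizing back is geometrically equivalent to multiplying the focal length by s, so the intrinsics must be updated in lockstep with the pixels. The sketch below is an assumed, dependency-free version (nearest-neighbour resize, hypothetical function name); the paper's augmentation pipeline may differ.

```python
import numpy as np

def zoom_augment(image, fx, fy, cx, cy, scale):
    """Simulate a longer focal length (scale > 1) by center-cropping the
    image by 1/scale and resizing back to the original size, updating the
    intrinsics to stay consistent with the new pixels."""
    h, w = image.shape[:2]
    ch, cw = int(round(h / scale)), int(round(w / scale))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    # nearest-neighbour resize back to (h, w), keeping the sketch stdlib-only
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    resized = crop[rows][:, cols]
    # crop + resize multiplies the focal lengths by `scale`; the principal
    # point shifts with the crop, then rescales with the resize
    new_fx, new_fy = fx * scale, fy * scale
    new_cx = (cx - left) * (w / cw)
    new_cy = (cy - top) * (h / ch)
    return resized, (new_fx, new_fy, new_cx, new_cy)

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
aug, new_K = zoom_augment(img, fx=100.0, fy=100.0, cx=32.0, cy=32.0, scale=2.0)
```

Training on such pairs gives the model two views with identical scene content but different camera parameters, which is precisely the signal needed to disentangle the two.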
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Spatial Intelligence
Camera Generalization
3D Localization
RGB-only Input
Innovation

Methods, ideas, or system contributions that make the work stand out.

Camera-Aware MLLM
Spatial Intelligence
Camera Intrinsics
Cross-Camera Generalization
Geometric Priors
Gongjie Zhang
DAMO Academy, Alibaba Group; HuPan Lab
Wenhao Li
Nanyang Technological University
Computer Vision, Deep Learning, Virtual Humans
Quanhao Qian
DAMO Academy, Alibaba Group; HuPan Lab
Jiuniu Wang
DAMO Academy, Alibaba Group; HuPan Lab
Deli Zhao
Alibaba DAMO Academy
Generative Models, Multimodal Learning, Foundation Models
Shijian Lu
College of Computing and Data Science, NTU
Image and Video Analytics, Computer Vision, Machine Learning
Ran Xu
DAMO Academy, Alibaba Group; HuPan Lab