🤖 AI Summary
This work addresses the limited cross-camera generalization of current multimodal large language models (MLLMs) that rely solely on RGB inputs: by neglecting camera parameters, they overfit to the training camera distribution and perform poorly on spatial tasks. To overcome this, the authors propose a camera-aware MLLM framework that, for the first time, explicitly embeds camera intrinsic parameters into the visual tokens. The approach further incorporates camera-aware data augmentation and a geometry-aware distillation mechanism leveraging a 3D vision foundation model, effectively decoupling scene content from camera viewpoint. Experiments demonstrate that the proposed method significantly outperforms existing baselines on spatial localization and navigation under cross-camera settings, highlighting the critical role of explicit camera modeling in the spatial reasoning and generalization capabilities of MLLMs.
📝 Abstract
Multimodal Large Language Models (MLLMs) that directly process RGB inputs for tasks like 3D localization and navigation have shown remarkable potential. However, we argue that these RGB-only approaches are fundamentally limited in their ability to generalize across cameras. By ignoring camera parameters, they entangle an object's physical properties with the camera's perspective, creating an irresolvable ambiguity. We show that this leads MLLMs to overfit to the training camera distribution rather than learning true, generalizable 3D geometric principles. To address this, we propose a Camera-Aware MLLM framework for spatial reasoning. It learns generalizable spatial reasoning by: (i) injecting camera intrinsics via a dense embedding that conditions each visual token; (ii) introducing a camera-aware data augmentation strategy that synthetically varies camera parameters, forcing the model to disentangle camera properties from scene content; and (iii) distilling geometric priors from a 3D vision foundation model. Extensive experiments demonstrate that camera-aware MLLMs substantially outperform their naive counterparts, particularly in cross-camera generalization tests on spatially grounded tasks, indicating that camera-awareness is not only beneficial but a prerequisite for robust and generalizable spatial intelligence in MLLMs.
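The intrinsics-injection idea in (i) can be illustrated with a minimal sketch. The encoding below is an assumption for illustration, not the paper's exact design: intrinsics (`fx`, `fy`, `cx`, `cy`) are normalized by image size, lifted to a dense vector with Fourier-style sinusoidal features, projected to the token width, and added to every visual token.

```python
import numpy as np

def camera_embedding(fx, fy, cx, cy, width, height, n_freqs=4):
    """Lift camera intrinsics to a dense vector (hypothetical encoding)."""
    # Normalize by image size so the encoding is resolution-independent
    # (assumed normalization scheme; the paper may differ).
    p = np.array([fx / width, fy / height, cx / width, cy / height])
    # Fourier-feature-style sinusoidal lift at geometric frequencies.
    freqs = 2.0 ** np.arange(n_freqs)              # (n_freqs,)
    angles = np.outer(p, freqs).ravel()            # (4 * n_freqs,)
    return np.concatenate([np.sin(angles), np.cos(angles)])  # (8 * n_freqs,)

def condition_tokens(visual_tokens, cam_vec, proj):
    """Project the camera vector to token width and add it to every
    visual token, so each token is conditioned on the intrinsics."""
    return visual_tokens + proj @ cam_vec          # broadcast over tokens

# Toy usage: 16 visual tokens of width 32, a 640x480 camera.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))
cam = camera_embedding(fx=600.0, fy=600.0, cx=320.0, cy=240.0,
                       width=640, height=480)
proj = rng.standard_normal((32, cam.shape[0])) * 0.01  # stand-in projection
out = condition_tokens(tokens, cam, proj)
```

In a real model the projection would be a learned linear layer inside the vision-language connector; the additive conditioning is one common choice, and concatenation or FiLM-style modulation would be equally plausible alternatives.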