Predicting Camera Pose from Perspective Descriptions for Spatial Reasoning

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the difficulty current vision-language models have in building an accurate 3D understanding of a scene from linguistic descriptions and reasoning from novel viewpoints across multiple views. To this end, the authors propose CAMCUE, a framework that injects camera pose into visual tokens as an explicit geometric anchor, enabling a direct mapping from language descriptions to target views without time-consuming search or matching. The approach integrates pose-aware multi-image fusion, language-pose alignment, and pose-conditioned view synthesis, and introduces CAMCUE-DATA, a new dataset comprising 27,668 samples. Experiments show that CAMCUE improves overall accuracy by 9.06%, predicts target poses with over 90% rotation accuracy (angular error under 20°) and translation accuracy within a 0.5 error threshold, and cuts single-instance inference time from 256.6 seconds to 1.45 seconds.

📝 Abstract
Multi-image spatial reasoning remains challenging for current multimodal large language models (MLLMs). While single-view perception is inherently 2D, reasoning over multiple views requires building a coherent scene understanding across viewpoints. In particular, we study perspective taking, where a model must build a coherent 3D understanding from multi-view observations and use it to reason from a new, language-specified viewpoint. We introduce CAMCUE, a pose-aware multi-image framework that uses camera pose as an explicit geometric anchor for cross-view fusion and novel-view reasoning. CAMCUE injects per-view pose into visual tokens, grounds natural-language viewpoint descriptions to a target camera pose, and synthesizes a pose-conditioned imagined target view to support answering. To support this setting, we curate CAMCUE-DATA with 27,668 training and 508 test instances pairing multi-view images and poses with diverse target-viewpoint descriptions and perspective-shift questions. We also include human-annotated viewpoint descriptions in the test split to evaluate generalization to human language. CAMCUE improves overall accuracy by 9.06% and predicts target poses from natural-language viewpoint descriptions with over 90% rotation accuracy within 20° and translation accuracy within a 0.5 error threshold. This direct grounding avoids expensive test-time search-and-match, reducing inference time from 256.6s to 1.45s per example and enabling fast, interactive use in real-world scenarios.
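The abstract's core mechanism, injecting per-view camera pose into visual tokens as a geometric anchor, can be illustrated with a minimal sketch. The paper does not specify the exact encoder, so everything below is an assumption: a hypothetical linear projection (`pose_embedding`) maps a flattened 4x4 camera extrinsic into the token dimension and is added to each visual token of that view.

```python
import numpy as np

rng = np.random.default_rng(0)

def pose_embedding(extrinsic: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hypothetical pose encoder: flatten the 4x4 extrinsic and
    project it into the visual-token dimension."""
    return np.tanh(extrinsic.reshape(-1) @ w + b)  # shape: (dim,)

def inject_pose(visual_tokens: np.ndarray, extrinsic: np.ndarray,
                w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Add the pose embedding to every visual token of one view,
    anchoring the tokens to their camera geometry."""
    return visual_tokens + pose_embedding(extrinsic, w, b)

dim = 8
w = rng.normal(size=(16, dim)) * 0.1   # 16 = flattened 4x4 extrinsic
b = np.zeros(dim)
tokens = rng.normal(size=(4, dim))     # 4 visual tokens for one view (toy size)
pose = np.eye(4)                       # identity extrinsic as a stand-in
fused = inject_pose(tokens, pose, w, b)
print(fused.shape)
```

In the full pipeline described above, tokens fused this way across views would feed the MLLM, which also grounds the language-specified viewpoint to a target pose; this sketch covers only the injection step.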
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
camera pose
multi-view perception
perspective taking
multimodal large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

camera pose estimation
multimodal spatial reasoning
pose-aware fusion
novel-view synthesis
perspective taking