Thinking with Camera: A Unified Multimodal Model for Camera-Centric Understanding and Generation

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the longstanding separation between understanding and generation in camera-centric multimodal tasks. To bridge this gap, the authors propose Puffin, a unified model that treats camera parameters (e.g., intrinsic matrices and extrinsic poses) as learnable linguistic tokens, enabling a deep integration of photogeometric reasoning with language modeling. They introduce Puffin-4M, a large-scale dataset of 4 million vision–language–camera triplets supporting fine-grained spatial perception. Puffin jointly leverages language-based regression, diffusion-based generation, pixel-wise camera map modeling, and instruction tuning to exploit both global camera parameters and local geometric cues. Experiments demonstrate that Puffin consistently outperforms specialized models on cross-view understanding and generation tasks and generalizes well to downstream applications, including spatial imagination, embodied world exploration, and photographic guidance.
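The "camera as language" idea can be sketched as quantizing continuous camera parameters into a discrete vocabulary that a language model can emit and consume. The bin count, parameter set (roll, pitch, vertical FoV), and value ranges below are illustrative assumptions, not the paper's actual tokenizer:

```python
import numpy as np

# Hypothetical settings: Puffin's real vocabulary size and parameter
# ranges are not specified here.
BINS = 256
RANGES = ((-90.0, 90.0), (-90.0, 90.0), (10.0, 120.0))  # roll, pitch, vfov (deg)

def camera_to_tokens(roll, pitch, vfov, ranges=RANGES):
    """Quantize camera parameters (degrees) into discrete token ids,
    so they can be handled like ordinary language tokens."""
    tokens = []
    for value, (lo, hi) in zip((roll, pitch, vfov), ranges):
        frac = (np.clip(value, lo, hi) - lo) / (hi - lo)
        tokens.append(int(round(frac * (BINS - 1))))
    return tokens

def tokens_to_camera(tokens, ranges=RANGES):
    """Inverse mapping: token ids back to approximate parameter values."""
    return [lo + t / (BINS - 1) * (hi - lo)
            for t, (lo, hi) in zip(tokens, ranges)]
```

With 256 bins, round-trip error stays within one bin width (under a degree for these ranges), which is typically acceptable for coarse viewpoint description.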

📝 Abstract
Camera-centric understanding and generation are two cornerstones of spatial intelligence, yet they are typically studied in isolation. We present Puffin, a unified camera-centric multimodal model that extends spatial awareness along the camera dimension. Puffin integrates language regression and diffusion-based generation to interpret and create scenes from arbitrary viewpoints. To bridge the modality gap between cameras and vision-language, we introduce a novel paradigm that treats camera as language, enabling thinking with camera. This guides the model to align spatially grounded visual cues with photographic terminology while reasoning across geometric context. Puffin is trained on Puffin-4M, a large-scale dataset of 4 million vision-language-camera triplets. We incorporate both global camera parameters and pixel-wise camera maps, yielding flexible and reliable spatial generation. Experiments demonstrate Puffin's superior performance over specialized models for camera-centric generation and understanding. With instruction tuning, Puffin generalizes to diverse cross-view tasks such as spatial imagination, world exploration, and photography guidance. We will release the code, models, dataset pipeline, and benchmark to advance multimodal spatial intelligence research.
Problem

Research questions and friction points this paper is trying to address.

Unifying camera-centric understanding and generation tasks
Bridging modality gaps between cameras and vision-language systems
Enabling spatial reasoning across diverse cross-view applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multimodal model integrates camera-centric understanding and generation
Treats camera as language to bridge modality gap
Uses global parameters and pixel-wise maps for spatial generation
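The pixel-wise camera maps mentioned above can be illustrated with a minimal sketch: given global parameters (pitch, roll, vertical FoV), compute a per-pixel latitude map, i.e., the elevation angle of each viewing ray. The exact map parameterization and axis conventions used by Puffin are assumptions here:

```python
import numpy as np

def latitude_map(h, w, pitch_deg, roll_deg, vfov_deg):
    """Per-pixel latitude map (degrees): elevation of each viewing ray
    above the horizon. Illustrative sketch only; conventions assumed:
    x right, y down, z forward; positive pitch tilts the camera up."""
    f = (h / 2) / np.tan(np.radians(vfov_deg) / 2)  # focal length in pixels
    v, u = np.mgrid[0:h, 0:w]
    # ray direction for each pixel in the camera frame, then normalize
    rays = np.stack([(u - w / 2) / f, (v - h / 2) / f, np.ones((h, w))], -1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    Rr = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])                       # roll about optical axis
    Rp = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])      # pitch about x axis
    world = rays @ (Rp @ Rr).T                       # rotate rays to world frame
    # latitude = angle between ray and horizontal plane (world up = -y)
    return np.degrees(np.arcsin(-world[..., 1]))
```

Such dense maps give the generator local geometric cues (e.g., where the horizon falls in the frame) that a single global parameter vector cannot express.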