Princeton365: A Diverse Dataset with Accurate Camera Pose

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing SLAM benchmarks suffer from insufficient ground-truth accuracy, limited scene diversity, and poor cross-scene comparability; meanwhile, Novel View Synthesis (NVS) benchmarks lack coverage of challenging scenarios such as fully non-Lambertian objects and 360° camera trajectories. To address these limitations, we introduce a large-scale, multimodal SLAM dataset comprising 365 indoor/outdoor and object-scanning video sequences, each annotated with high-precision pose ground truth and synchronized monocular/stereo RGB and IMU data. We propose a joint spherical calibration board–checkerboard framework for accurate ground-truth acquisition, design a scene-scale-aware, optical-flow-driven pose error metric, and establish the first NVS benchmark explicitly covering fully non-Lambertian surfaces and 360° trajectories. Extensive experiments demonstrate that our dataset significantly improves cross-scene algorithmic comparability and failure-mode interpretability for both SLAM and NVS systems.

📝 Abstract
We introduce Princeton365, a large-scale diverse dataset of 365 videos with accurate camera pose. Our dataset bridges the gap between accuracy and data diversity in current SLAM benchmarks by introducing a novel ground truth collection framework that leverages calibration boards and a 360-camera. We collect indoor, outdoor, and object scanning videos with synchronized monocular and stereo RGB video outputs as well as IMU. We further propose a new scene scale-aware evaluation metric for SLAM based on the optical flow induced by the camera pose estimation error. Unlike existing metrics such as Average Trajectory Error (ATE), our metric allows comparison of SLAM performance across scenes, enabling researchers to analyze the failure modes of their methods. We also propose a challenging Novel View Synthesis benchmark that covers cases not addressed by current NVS benchmarks, such as fully non-Lambertian scenes with 360-degree camera trajectories. Please visit https://princeton365.cs.princeton.edu for the dataset, code, videos, and submission.
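The core idea behind the metric — measuring pose error by the optical flow it induces in image space — can be sketched as follows. This is a minimal illustration of the underlying principle, not the paper's exact formulation: the point set, intrinsics, and function names here are hypothetical, and the paper's full metric and normalization may differ.

```python
import numpy as np

def induced_flow_magnitude(points_w, K, T_gt, T_est):
    """Per-point pixel displacement caused by a camera pose error.

    points_w    : (N, 3) 3D scene points in world coordinates (hypothetical)
    K           : (3, 3) camera intrinsics
    T_gt, T_est : (4, 4) world-to-camera extrinsics (ground truth / estimate)
    Returns an (N,) array of flow magnitudes in pixels.
    """
    def project(T):
        pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
        cam = (T @ pts_h.T).T[:, :3]      # transform points into camera frame
        uv = (K @ cam.T).T                # apply intrinsics
        return uv[:, :2] / uv[:, 2:3]     # perspective divide -> pixel coords
    # The "induced flow" is the pixel motion a point appears to undergo
    # purely because the estimated pose differs from the true one.
    return np.linalg.norm(project(T_est) - project(T_gt), axis=1)
```

Because the error is expressed in pixels rather than meters, the same threshold is meaningful in a small indoor room and a large outdoor scene, which is what makes such a metric comparable across scenes in a way ATE is not.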
Problem

Research questions and friction points this paper is trying to address.

Existing SLAM benchmarks trade ground-truth accuracy against scene diversity
Metrics such as ATE do not allow comparison of SLAM performance across scenes
Current NVS benchmarks omit fully non-Lambertian scenes and 360-degree trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ground-truth collection framework combining calibration boards with a 360-camera
Scene scale-aware SLAM evaluation metric based on induced optical flow
NVS benchmark covering fully non-Lambertian scenes and 360-degree trajectories