The Oxford Spires Dataset: Benchmarking Large-Scale LiDAR-Visual Localisation, Reconstruction and Radiance Field Methods

📅 2024-11-15
🏛️ arXiv.org
📈 Citations: 10
Influential: 2
🤖 AI Summary
This work addresses the challenge of evaluating the accuracy and generalisation of LiDAR-visual localisation, 3D reconstruction, and radiance field methods (e.g., NeRF, 3D Gaussian Splatting) in large-scale multimodal settings. To this end, the authors introduce a high-precision LiDAR-vision-TLS benchmark dataset captured around well-known Oxford landmarks, featuring three synchronised global-shutter colour cameras, an automotive 3D LiDAR, an IMU, and millimetre-accurate terrestrial laser scanning (TLS) point clouds as ground truth. They conduct a systematic evaluation of radiance field methods under cross-trajectory pose generalisation, revealing critical limitations: overfitting to the training views, sharp quality degradation at out-of-sequence viewpoints, and lower 3D geometric accuracy than multi-view stereo (MVS) given the same visual inputs. They also contribute a pipeline that registers the mobile LiDAR scans to the TLS models to produce localisation ground truth, along with a standardised cross-trajectory generalisation evaluation protocol. The dataset, calibration tools, and unified evaluation framework are publicly released to foster deeper integration of radiance field methods with SLAM and 3D reconstruction systems.
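The cross-trajectory protocol amounts to comparing image-quality scores (PSNR is the standard choice) on renders at poses near the training trajectory against renders at poses from a distant, held-out trajectory. A minimal sketch of such a check, assuming rendered and ground-truth images as float arrays in [0, 1]; the function names are illustrative, not part of the paper's released framework:

```python
import numpy as np

def psnr(rendered, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered view and the held-out photo."""
    mse = np.mean((rendered - gt) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def generalisation_gap(in_seq_pairs, out_seq_pairs):
    """Mean PSNR on in-sequence views minus mean PSNR on out-of-sequence views.

    Each argument is a list of (rendered, ground_truth) image pairs.
    A large positive gap indicates overfitting to the training poses.
    """
    in_seq = np.mean([psnr(r, g) for r, g in in_seq_pairs])
    out_seq = np.mean([psnr(r, g) for r, g in out_seq_pairs])
    return in_seq - out_seq
```

The paper's finding is, in these terms, that state-of-the-art radiance field methods show a large gap: high in-sequence PSNR that drops sharply at out-of-sequence viewpoints.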

📝 Abstract
This paper introduces a large-scale multi-modal dataset captured in and around well-known landmarks in Oxford using a custom-built multi-sensor perception unit as well as a millimetre-accurate map from a Terrestrial LiDAR Scanner (TLS). The perception unit includes three synchronised global shutter colour cameras, an automotive 3D LiDAR scanner, and an inertial sensor - all precisely calibrated. We also establish benchmarks for tasks involving localisation, reconstruction, and novel-view synthesis, which enable the evaluation of Simultaneous Localisation and Mapping (SLAM) methods, Structure-from-Motion (SfM) and Multi-view Stereo (MVS) methods as well as radiance field methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting. To evaluate 3D reconstruction the TLS 3D models are used as ground truth. Localisation ground truth is computed by registering the mobile LiDAR scans to the TLS 3D models. Radiance field methods are evaluated not only with poses sampled from the input trajectory, but also from viewpoints that are from trajectories which are distant from the training poses. Our evaluation demonstrates a key limitation of state-of-the-art radiance field methods: we show that they tend to overfit to the training poses/images and do not generalise well to out-of-sequence poses. They also underperform in 3D reconstruction compared to MVS systems using the same visual inputs. Our dataset and benchmarks are intended to facilitate better integration of radiance field methods and SLAM systems. The raw and processed data, along with software for parsing and evaluation, can be accessed at https://dynamic.robots.ox.ac.uk/datasets/oxford-spires/.
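The localisation ground truth described above comes from registering each mobile LiDAR scan to the millimetre-accurate TLS model. As a rough illustration of the underlying idea (not the authors' actual registration pipeline), here is a minimal point-to-point ICP in NumPy, using brute-force nearest neighbours for clarity:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Align a mobile scan (src, Nx3) to a reference cloud (dst, Mx3).

    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force; a KD-tree in practice)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production pipeline would add an initial pose guess, robust outlier rejection, and point-to-plane residuals, but the structure is the same: iterate correspondence search and closed-form rigid alignment.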
Problem

Research questions and friction points this paper is trying to address.

Introducing a large-scale multi-modal dataset for LiDAR-visual localization and reconstruction
Establishing benchmarks for SLAM, SfM, MVS and radiance field methods evaluation
Addressing radiance field methods' overfitting and poor generalization limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Custom-built multi-sensor perception unit
Millimetre-accurate TLS map as ground truth
Benchmarks spanning SLAM, SfM, MVS, and radiance field methods (NeRF, 3D Gaussian Splatting)
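Using the TLS map as ground truth enables the standard surface-quality metrics for reconstruction benchmarks: accuracy (how close reconstructed points lie to the TLS model) and completeness (how much of the TLS model the reconstruction covers). A minimal sketch, assuming both clouds are given as Nx3 NumPy arrays; the function names and the threshold `tau` are illustrative, not the benchmark's released evaluation code:

```python
import numpy as np

def nn_dists(a, b):
    """For each point in a, distance to its nearest neighbour in b (brute force)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)

def reconstruction_metrics(recon, tls_gt, tau=0.1):
    """Accuracy: mean distance from recon points to the TLS ground truth.
    Completeness: fraction of TLS points with a recon point within tau metres."""
    accuracy = nn_dists(recon, tls_gt).mean()
    completeness = (nn_dists(tls_gt, recon) <= tau).mean()
    return accuracy, completeness
```

The two metrics are deliberately asymmetric: a sparse but precise reconstruction scores well on accuracy and poorly on completeness, while a dense but noisy one does the opposite.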
Yifu Tao
University of Oxford
3D Reconstruction, Computer Vision, Robotics
Miguel Ángel Muñoz-Bañón
Oxford Robotics Inst., Dept. of Eng. Science, Univ. of Oxford, UK; Group of Automation, Robotics and Computer Vision (AUROVA), University of Alicante, Spain
Lintong Zhang
Oxford Robotics Inst., Dept. of Eng. Science, Univ. of Oxford, UK
Jiahao Wang
Oxford Robotics Inst., Dept. of Eng. Science, Univ. of Oxford, UK
L. Fu
Oxford Robotics Inst., Dept. of Eng. Science, Univ. of Oxford, UK
Maurice F. Fallon
Oxford Robotics Inst., Dept. of Eng. Science, Univ. of Oxford, UK