🤖 AI Summary
Existing indoor SLAM datasets lack architectural semantics and structural priors, limiting their utility for building-intelligence applications such as semantic mapping and structure-aware localization. To address this gap, we introduce the first open-source benchmark dataset that deeply integrates SLAM and Building Information Modeling (BIM). Grounded in the real-world Hong Kong University of Science and Technology Main Building, the dataset provides both design-phase BIM models and multi-session, multimodal (LiDAR, IMU, RGB-D, wheel odometry) as-built SLAM data, with precise spatiotemporal alignment between them, which is a novel contribution. Leveraging BIM lightweight decomposition and cross-modal temporal synchronization, the dataset enables rigorous evaluation of registration, pose estimation, and semantic mapping tasks. All raw data, annotations, and baseline results are publicly released, establishing the first standardized benchmark for SLAM-BIM co-modeling and advancing research at the intersection of robotics, computer vision, and digital construction.
📝 Abstract
Existing indoor SLAM datasets primarily focus on robot sensing and often lack architectural information about the buildings themselves. To address this gap, we design and construct the first dataset to couple SLAM and BIM, named SLABIM. This dataset provides both BIM and SLAM-oriented sensor data, each modeling the same university building at HKUST. The as-designed BIM is decomposed and converted for ease of use. We employ a multi-sensor suite for multi-session data collection and mapping to obtain the as-built model. All related data are timestamped and organized, enabling users to deploy and evaluate methods effectively. Furthermore, we deploy advanced methods and report experimental results on three tasks: registration, localization, and semantic mapping, demonstrating the effectiveness and practicality of SLABIM. We make our dataset open-source at https://github.com/HKUST-Aerial-Robotics/SLABIM.
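Because the dataset's modalities are timestamped per sensor, a typical first step when consuming such data is to associate measurements across streams by nearest timestamp. The sketch below is a minimal illustration of this idea, not code from the SLABIM toolkit: it assumes each stream is available as a sorted list of timestamps in seconds, and the function name and tolerance are hypothetical.

```python
import bisect

def associate(timestamps_a, timestamps_b, max_dt=0.02):
    """Pair each timestamp in stream A with the nearest timestamp in
    stream B, keeping only pairs closer than max_dt seconds.
    Both inputs are assumed to be sorted ascending."""
    pairs = []
    for t in timestamps_a:
        i = bisect.bisect_left(timestamps_b, t)
        # Candidates: the first timestamp at/after t, and the one before it.
        candidates = []
        if i < len(timestamps_b):
            candidates.append(timestamps_b[i])
        if i > 0:
            candidates.append(timestamps_b[i - 1])
        if not candidates:
            continue
        best = min(candidates, key=lambda tb: abs(tb - t))
        if abs(best - t) <= max_dt:
            pairs.append((t, best))
    return pairs

# Example: match LiDAR scan times against camera frame times.
lidar = [0.0, 0.1, 0.2]
camera = [0.005, 0.095, 0.31]
print(associate(lidar, camera))  # third scan has no match within 20 ms
```

The binary search keeps the association O(n log m), which matters for multi-session recordings with long streams.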