🤖 AI Summary
Existing 3D car datasets are predominantly synthetic or low-quality, limiting high-fidelity reconstruction and understanding in real-world scenarios. To address this, we introduce 3DRealCar, the first large-scale real-world 3D car dataset, comprising 2,500 vehicles from over 100 brands, each captured with ~200 high-resolution, 360° RGB-D frames and accurate point clouds. Our methodology features multi-view dense sampling; controlled acquisition under reflective, standard, and dark lighting conditions; background removal; and coordinate normalization, enabling, for the first time, high-volume, high-fidelity, and high-diversity 3D car data collection in realistic settings. The dataset delivers standardized, background-free, axis-aligned point clouds with fine-grained vehicle-part parsing maps. It significantly improves 3D reconstruction quality under standard illumination and exposes critical performance bottlenecks of current methods under reflective and dark conditions. 3DRealCar establishes a new benchmark for both 2D and 3D automotive perception tasks.
📝 Abstract
3D cars are commonly used in self-driving systems, virtual/augmented reality, and games. However, existing 3D car datasets are either synthetic or low-quality, leaving a significant gap in high-quality real-world 3D car data and limiting their use in practical scenarios. In this paper, we propose the first large-scale 3D real car dataset, termed 3DRealCar, offering three distinctive features. (1) **High-Volume**: 2,500 cars are meticulously scanned by 3D scanners, yielding car images and point clouds with real-world dimensions; (2) **High-Quality**: Each car is captured in an average of 200 dense, high-resolution 360-degree RGB-D views, enabling high-fidelity 3D reconstruction; (3) **High-Diversity**: The dataset contains cars from over 100 brands, collected under three distinct lighting conditions: reflective, standard, and dark. Additionally, we provide detailed car parsing maps for each instance to promote research on car parsing tasks. Moreover, we remove background point clouds and standardize the car orientation to a unified axis, enabling background-free reconstruction of the cars alone and controllable rendering. We benchmark 3D reconstruction results with state-of-the-art methods under each lighting condition in 3DRealCar. Extensive experiments demonstrate that the standard lighting portion of 3DRealCar can be used to produce a large number of high-quality 3D cars, improving various car-related 2D and 3D tasks. Notably, our dataset reveals that recent 3D reconstruction methods struggle to reconstruct high-quality 3D cars under reflective and dark lighting conditions. [Our dataset is available here.](https://xiaobiaodu.github.io/3drealcar/)
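The preprocessing described above, removing background points and standardizing each car's orientation to a unified axis, can be sketched in a few lines. The snippet below is a minimal, illustrative version (not the authors' actual pipeline): it assumes the background has already been segmented away, then centers the point cloud and uses PCA to rotate the car so its longest principal axis (the vehicle's length) lies along x.

```python
import numpy as np

def normalize_car_pointcloud(points: np.ndarray) -> np.ndarray:
    """Center a background-free car point cloud at the origin and
    rotate it into its principal-axis frame (longest axis -> x).

    points: (N, 3) array of xyz coordinates, background already removed.
    Returns the normalized (N, 3) point cloud.
    """
    centered = points - points.mean(axis=0)

    # Principal axes = eigenvectors of the 3x3 covariance matrix,
    # sorted by decreasing variance (car length > width > height).
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending order
    order = np.argsort(eigvals)[::-1]
    R = eigvecs[:, order]                        # columns: principal axes

    # Keep a right-handed coordinate frame (det(R) = +1).
    if np.linalg.det(R) < 0:
        R[:, -1] *= -1

    # Express the points in the principal-axis frame.
    return centered @ R
```

A real pipeline would additionally fix the sign ambiguity of each axis (e.g. so the car's front consistently faces +x) using semantic cues such as the parsing maps; PCA alone cannot distinguish front from rear.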