Urban 3D Change Detection Using LiDAR Sensor for HD Map Maintenance and Smart Mobility

📅 2025-10-23
🤖 AI Summary
Urban-scale LiDAR multi-temporal 3D change detection faces key challenges: sensitivity to minor vertical discrepancies, terrain slope, viewpoint misalignment; high memory consumption; reliance on pre-alignment; degradation of thin structures; and semantic class inconsistency. To address these, we propose an object-centric, uncertainty-aware detection framework. Our core contributions are: (1) joint multi-resolution Normal Distributions Transform (NDT) and point-to-plane ICP registration; (2) confidence quantification leveraging registration covariance and surface roughness; (3) semantic instance segmentation-guided class-constrained bipartite matching to enforce class-count consistency; and (4) geometry-prior-driven cross-epoch instance association with multi-dimensional feature fusion—including 3D overlap ratio, normal displacement, elevation/volume differences, and histogram distance—integrated via gated decision learning. Evaluated across 15 urban blocks, our method achieves 95.2% accuracy, 90.4% mF1, and 82.6% mIoU, with class-specific change IoU improving by 7.6 percentage points to 74.8%, significantly outperforming Triplet KPConv.
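The class-constrained bipartite matching in contribution (3) can be sketched with a standard assignment solver. This is a minimal illustration, not the paper's implementation: the dummy-augmentation pattern (extra rows/columns that absorb appeared, disappeared, split, or merged instances) is a common convention, and the costs, class labels, and `dummy_cost` value below are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instances(cost, classes_a, classes_b, dummy_cost=1.0):
    """Match epoch-A instances to epoch-B instances of the same class.

    Cross-class pairs are forbidden with a prohibitive cost; dummy
    rows/columns absorb unmatched instances (appeared/disappeared objects,
    splits, merges), so per-class counts are preserved.
    """
    n_a, n_b = cost.shape
    big = 1e6  # effectively forbids cross-class assignments
    c = cost.astype(float).copy()
    for i in range(n_a):
        for j in range(n_b):
            if classes_a[i] != classes_b[j]:
                c[i, j] = big
    # Augment to a square (n_a + n_b) matrix so every instance can fall
    # back to a dummy partner at a fixed cost.
    full = np.full((n_a + n_b, n_a + n_b), dummy_cost)
    full[:n_a, :n_b] = c
    full[n_a:, n_b:] = 0.0  # dummy-to-dummy pairs are free
    rows, cols = linear_sum_assignment(full)
    # Keep only real-to-real pairs that were not forbidden.
    return [(int(r), int(k)) for r, k in zip(rows, cols)
            if r < n_a and k < n_b and full[r, k] < big]
```

With same-class instances on both sides, the solver returns one-to-one pairs; an instance whose only same-class candidates are too costly falls back to a dummy and is reported as appeared or disappeared.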

📝 Abstract
High-definition 3D city maps underpin smart transportation, digital twins, and autonomous driving, where object-level change detection across bi-temporal LiDAR enables HD map maintenance, construction monitoring, and reliable localization. Classical DSM differencing and image-based methods are sensitive to small vertical bias, ground slope, and viewpoint mismatch, and yield cell-wise outputs without object identity. Point-based neural models and voxel encodings demand large memory, assume near-perfect pre-alignment, degrade thin structures, and seldom enforce class-consistent association, which leaves split or merge cases unresolved and ignores uncertainty. We propose an object-centric, uncertainty-aware pipeline for city-scale LiDAR that aligns epochs with multi-resolution NDT followed by point-to-plane ICP, normalizes height, and derives a per-location level of detection from registration covariance and surface roughness to calibrate decisions and suppress spurious changes. Geometry-only proxies seed cross-epoch associations that are refined by semantic and instance segmentation and a class-constrained bipartite assignment with augmented dummies to handle splits and merges while preserving per-class counts. Tiled processing bounds memory without eroding narrow ground changes, and instance-level decisions combine 3D overlap, normal-direction displacement, and height and volume differences with a histogram distance, all gated by the local level of detection to remain stable under partial overlap and sampling variation. On 15 representative Subiaco blocks the method attains 95.2% accuracy, 90.4% mF1, and 82.6% mIoU, exceeding Triplet KPConv by 0.2 percentage points in accuracy, 0.2 in mF1, and 0.8 in mIoU, with the largest gain on the Decreased class, where IoU reaches 74.8% and improves by 7.6 points.
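The per-location level of detection described in the abstract can be illustrated as an uncertainty floor that a height difference must exceed before it counts as change. This is a hedged sketch: the quadrature combination of registration sigma and the two epochs' roughness values, and the k-sigma factor, are common conventions assumed here, not the paper's exact formulation.

```python
import numpy as np

def level_of_detection(sigma_reg, roughness_a, roughness_b, k=1.96):
    """Minimum detectable height change per location (quadrature sum,
    scaled by a k-sigma confidence factor)."""
    return k * np.sqrt(sigma_reg**2 + roughness_a**2 + roughness_b**2)

def gated_change(dz, lod):
    """Suppress height differences that fall below the local level of
    detection; values at or above it pass through unchanged."""
    return np.where(np.abs(dz) >= lod, dz, 0.0)
```

For example, with 2 cm registration uncertainty and 3 cm / 4 cm local roughness, the floor is roughly 10.6 cm, so a 5 cm difference is suppressed while a 20 cm one is kept.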
Problem

Research questions and friction points this paper is trying to address.

Detecting 3D urban changes from LiDAR for HD map maintenance and smart mobility
Addressing limitations of classical methods sensitive to alignment errors and viewpoint mismatches
Resolving object split/merge cases while handling registration uncertainty in city-scale data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-centric uncertainty-aware pipeline for city-scale LiDAR
Geometry proxies seed cross-epoch semantic instance associations
Tiled processing with decisions gated by a local level of detection
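The tiled-processing idea in the bullets above amounts to splitting the city-scale cloud into fixed-size XY tiles with an overlap margin, so narrow changes at tile borders are not clipped. A minimal sketch, with illustrative (not the paper's) tile size and margin:

```python
import numpy as np

def tile_points(xy, tile=50.0, margin=2.0):
    """Yield ((tx, ty), point_mask) for overlapping square tiles.

    Each tile covers [tile] metres per side, extended by [margin] on every
    edge; points near a border therefore appear in adjacent tiles too.
    """
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / tile).astype(int)
    for tx in range(idx[:, 0].max() + 1):
        for ty in range(idx[:, 1].max() + 1):
            lo = mins + np.array([tx, ty]) * tile - margin
            hi = lo + tile + 2 * margin
            mask = np.all((xy >= lo) & (xy < hi), axis=1)
            if mask.any():
                yield (tx, ty), mask
```

Each tile is then processed independently, which bounds peak memory; the margin means a border-straddling object is seen whole by at least one tile.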
Hezam Albaqami
Department of Computer Science & Artificial Intelligence, University of Jeddah
Artificial Intelligence, Machine Learning, Pattern Recognition, Bioinformatics
Haitian Wang
University of Western Australia
3D point cloud, Computer vision, Machine learning, IoT, Remote sensing
Xinyu Wang
Department of Computer Science and Software Engineering, University of Western Australia, Perth, WA 6009, Australia
Muhammad Ibrahim
Department of Computer Science and Software Engineering, University of Western Australia, Perth, WA 6009, Australia
Zainy M. Malakan
Umm Al-Qura University, College of Computing, Department of Data Science
Computer Vision, Hand Gestures, Deep Learning, AI
Abdullah M. Alqamdi
Department of Computer Science and Artificial Intelligence, College of Computer Science and Engineering, University of Jeddah, Jeddah 21493, Saudi Arabia
Mohammed H. Alghamdi
Department of Information and Technology Systems, College of Computer Science and Engineering, University of Jeddah, Jeddah 21493, Saudi Arabia; Department of Informatics and Computer Systems, College of Computer Science, King Khalid University, Abha, Saudi Arabia
Ajmal Mian
Department of Computer Science and Software Engineering, University of Western Australia, Perth, WA 6009, Australia