Radiometric fingerprinting of object surfaces using mobile laser scanning and semantic 3D road space models

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited representation of surface material properties in existing semantic 3D city models, which hinders advanced analytical capabilities in digital twins. To overcome this, the authors propose a novel concept termed “object-oriented radiometric fingerprinting,” integrating multi-temporal, multi-source mobile LiDAR observations with LOD3-level CityGML 3.0 semantic models. By constructing the 3DSensorDB geodatabase, they efficiently associate 312.4 million LiDAR beams with 6,368 semantic objects. The approach enables automatic extraction and consistent modeling of surface material characteristics across different scans and sensors, revealing recurrent patterns of dominant materials. The work further contributes an open-source release of the model, algorithms, and database, establishing a new paradigm for high-fidelity urban digital twins.

📝 Abstract
Although semantic 3D city models are internationally available and becoming increasingly detailed, the incorporation of material information remains largely untapped. However, a structured representation of materials and their physical properties could substantially broaden the application spectrum and analytical capabilities of urban digital twins. At the same time, the growing number of repeated mobile laser scans of cities and their street spaces yields a wealth of observations influenced by the material characteristics of the corresponding surfaces. To leverage this information, we propose radiometric fingerprints of object surfaces by grouping LiDAR observations reflected from the same semantic object under varying distances, incident angles, environmental conditions, sensors, and scanning campaigns. Our study demonstrates how 312.4 million individual beams acquired across four campaigns using five LiDAR sensors on the Audi Autonomous Driving Dataset (A2D2) vehicle can be automatically associated with 6,368 individual objects of the semantic 3D city model. The model comprises a comprehensive semantic representation of four inner-city streets at Level of Detail (LOD) 3 with centimeter-level accuracy. It is based on the CityGML 3.0 standard and enables fine-grained sub-differentiation of objects. The extracted radiometric fingerprints for object surfaces reveal recurring intra-class patterns that indicate class-dominant materials. The semantic model, the method implementations, and the developed geodatabase solution 3DSensorDB are released at: https://github.com/tum-gis/sensordb
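The core grouping step described above — pooling LiDAR intensity observations that hit the same semantic object across scans, sensors, and campaigns, then summarizing them — can be sketched as follows. This is a minimal illustration, not the paper's actual 3DSensorDB implementation: the record fields, object IDs, and the summary statistics chosen here are assumptions for demonstration.

```python
from collections import defaultdict
from statistics import mean, median, pstdev

# Hypothetical per-beam records: (object_id, intensity, distance_m, incidence_deg, campaign).
# Field names and values are illustrative; the real 3DSensorDB schema may differ.
beams = [
    ("wall_042", 0.61, 12.3, 18.0, "c1"),
    ("wall_042", 0.58, 14.1, 22.5, "c2"),
    ("wall_042", 0.64, 11.7, 15.2, "c1"),
    ("sign_007", 0.93, 20.4, 5.1, "c1"),
    ("sign_007", 0.90, 19.8, 7.4, "c2"),
]

def radiometric_fingerprints(beams):
    """Group intensity observations by semantic object and summarize each group."""
    groups = defaultdict(list)
    for obj_id, intensity, _dist, _angle, _campaign in beams:
        groups[obj_id].append(intensity)
    return {
        obj_id: {
            "n": len(vals),          # number of beams associated with the object
            "mean": mean(vals),      # central tendency of reflected intensity
            "median": median(vals),
            "std": pstdev(vals),     # spread across scans/sensors/conditions
        }
        for obj_id, vals in groups.items()
    }

fingerprints = radiometric_fingerprints(beams)
```

A per-object intensity distribution like this is what would be compared across objects of the same class to surface the recurring intra-class patterns the paper reports; the real system additionally conditions on distance, incidence angle, and sensor.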
Problem

Research questions and friction points this paper is trying to address.

radiometric fingerprinting
semantic 3D city models
material characterization
mobile laser scanning
urban digital twins
Innovation

Methods, ideas, or system contributions that make the work stand out.

radiometric fingerprinting
semantic 3D city model
mobile laser scanning
material characterization
CityGML 3.0