🤖 AI Summary
To address the high annotation cost and low sample efficiency in semantic classification of single-photon LiDAR images, this paper proposes the first imaging-condition-aware active learning framework tailored to single-photon imagery. The method jointly models predictive uncertainty and physical sensitivity to imaging conditions—such as photon budget, background noise, and surface reflectance—enabling environment-adaptive sample selection. It further integrates synthetic data augmentation to explicitly capture imaging-condition variability. On synthetic data, the approach achieves 97% accuracy with only 1.5% of samples labeled; on real single-photon LiDAR data, it attains 90.63% accuracy with just 8% of samples labeled, surpassing the best baseline by 4.51% and approaching conventional image classification performance. To the authors' knowledge, this is the first work to embed imaging-physics priors into the active learning paradigm, establishing a methodology for resource-efficient single-photon visual understanding.
📝 Abstract
Single-photon LiDAR achieves high-precision 3D imaging in extreme environments through quantum-level photon detection. Current research focuses primarily on reconstructing 3D scenes from sparse photon events, whereas the semantic interpretation of single-photon images remains underexplored due to high annotation costs and inefficient labeling strategies. This paper presents the first active learning framework for single-photon image classification. The core contribution is an imaging-condition-aware sampling strategy that integrates synthetic augmentation to model variability across imaging conditions. By identifying samples on which the model is both uncertain and sensitive to these conditions, the proposed method selectively annotates only the most informative examples. Experiments on both synthetic and real-world datasets show that our approach outperforms all baselines and achieves high classification accuracy with significantly fewer labeled samples: 97% accuracy on synthetic single-photon data with only 1.5% of samples labeled, and 90.63% accuracy on real-world data with just 8% of samples labeled, which is 4.51% higher than the best-performing baseline. This demonstrates that active learning can match on single-photon images the classification performance attainable on classical images, opening the door to large-scale use of single-photon data in real-world applications.
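The selection rule described above — annotate samples that are both high-uncertainty and highly sensitive to imaging conditions — can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual method: the entropy-based uncertainty, variance-across-augmentations sensitivity measure, the `alpha` mixing weight, and all function names are our assumptions for concreteness.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy of softmax outputs, shape (N, C) -> (N,).
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def condition_sensitivity(probs_per_condition):
    # probs_per_condition: (K, N, C) predictions under K synthetic
    # imaging conditions (e.g. varied photon budget, noise level).
    # Sensitivity (assumed) = mean variance of class probabilities
    # across conditions, per sample -> (N,).
    return probs_per_condition.var(axis=0).mean(axis=1)

def select_samples(probs, probs_per_condition, budget, alpha=0.5):
    # Score = weighted sum of min-max-normalized uncertainty and
    # sensitivity; annotate the top-`budget` unlabeled samples.
    u = entropy(probs)
    s = condition_sensitivity(probs_per_condition)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    score = alpha * u + (1 - alpha) * s
    return np.argsort(-score)[:budget]
```

In this sketch, synthetic augmentation supplies the K condition-perturbed copies of each unlabeled sample; a sample whose predictions stay confident and stable under those perturbations scores low and is skipped, while one that is uncertain or condition-fragile is sent for annotation.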