ModelNet40-E: An Uncertainty-Aware Benchmark for Point Cloud Classification

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for evaluating the robustness of point cloud classification lack realistic noise modeling and per-point uncertainty annotations, which hinders fine-grained uncertainty-aware analysis. This work introduces the first LiDAR point cloud benchmark jointly designed for robustness and calibration assessment. It features controllably synthesized LiDAR-like noise, including explicitly parameterized Gaussian noise, and provides the corresponding per-point uncertainty ground truth. We conduct a multi-dimensional evaluation across classification accuracy, calibration error, and uncertainty consistency using representative models: PointNet, DGCNN, and Point Transformer v3. Experimental results show consistent performance degradation under noise across all models; however, Point Transformer v3 achieves superior uncertainty calibration, with predicted uncertainties that agree closely with the empirical measurement errors. These findings demonstrate the benchmark's value for advancing uncertainty-aware point cloud classification research.

📝 Abstract
We introduce ModelNet40-E, a new benchmark designed to assess the robustness and calibration of point cloud classification models under synthetic LiDAR-like noise. Unlike existing benchmarks, ModelNet40-E provides both noise-corrupted point clouds and point-wise uncertainty annotations via Gaussian noise parameters (σ, μ), enabling fine-grained evaluation of uncertainty modeling. We evaluate three popular models (PointNet, DGCNN, and Point Transformer v3) across multiple noise levels using classification accuracy, calibration metrics, and uncertainty-awareness measures. While all models degrade under increasing noise, Point Transformer v3 demonstrates superior calibration, with predicted uncertainties more closely aligned with the underlying measurement uncertainty.
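The corruption scheme described above (Gaussian noise with per-point parameters that double as uncertainty ground truth) can be sketched as follows. This is a minimal illustration, not the benchmark's actual generation code; the function name, the sigma range, and the choice of a uniform per-point σ distribution are assumptions.

```python
import numpy as np

def corrupt_with_gaussian_noise(points, sigma_range=(0.0, 0.05), mu=0.0, seed=0):
    """Perturb each point with Gaussian noise of a per-point standard
    deviation, returning both the corrupted cloud and the per-point
    sigma values as uncertainty ground truth.

    points: (N, 3) array of xyz coordinates.
    Returns (noisy_points, sigmas) with shapes (N, 3) and (N,).
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    # One noise scale per point; this becomes the per-point uncertainty label.
    sigmas = rng.uniform(sigma_range[0], sigma_range[1], size=n)
    # Broadcast each point's sigma across its xyz components.
    noise = rng.normal(loc=mu, scale=sigmas[:, None], size=points.shape)
    return points + noise, sigmas

# Example: corrupt a toy unit-cube cloud of 1024 points.
cloud = np.random.default_rng(1).uniform(-0.5, 0.5, size=(1024, 3))
noisy, sigma_gt = corrupt_with_gaussian_noise(cloud)
```

Pairing each corrupted cloud with its `sigma_gt` vector is what enables the fine-grained comparison between a model's predicted uncertainties and the injected measurement noise.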
Problem

Research questions and friction points this paper is trying to address.

Assess robustness of point cloud classification under noise
Evaluate uncertainty modeling with noise-corrupted data
Compare model calibration and uncertainty-awareness across noise levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ModelNet40-E benchmark with noise-corrupted data
Provides point-wise uncertainty annotations via Gaussian parameters
Evaluates models using accuracy, calibration, and uncertainty-awareness
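The calibration evaluation mentioned above is commonly measured with expected calibration error (ECE): predictions are binned by confidence, and the gap between mean confidence and empirical accuracy is averaged over bins. The paper does not specify its exact calibration metric here, so the sketch below is a generic ECE implementation, not the benchmark's own code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average over confidence bins of
    |mean confidence - empirical accuracy|.

    confidences: (N,) predicted max-class probabilities in [0, 1].
    correct: (N,) array of 0/1 indicating whether each prediction was right.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += (mask.sum() / n) * gap  # weight bin by its share of samples
    return ece

# A perfectly calibrated toy case: 50%-confident predictions that are
# right half the time, and 100%-confident ones that are always right.
conf = np.array([1.0, 1.0, 0.5, 0.5])
corr = np.array([1.0, 1.0, 1.0, 0.0])
```

A well-calibrated model (as reported for Point Transformer v3) drives this gap toward zero across noise levels, while an overconfident model accumulates large per-bin gaps as noise increases.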