🤖 AI Summary
In point cloud classification, deep neural networks (DNNs) suffer severely degraded robustness under input corruptions such as sensor noise and occlusion, owing to over-reliance on particular input features. This work applies Shapley values to point cloud robustness analysis, enabling quantitative assessment of how sensitive model predictions are to individual points or regions. Building on this, the authors propose a desensitization-based adversarial training framework: (i) points with high Shapley contributions are eliminated and spatial transformations simulate diverse corruptions, yielding adversarial samples; (ii) the model is adversarially trained on these samples to smooth its sensitivity across features; and (iii) self-distillation from clean samples compensates for the information lost through feature suppression. Evaluated on the ModelNet-C and PointCloud-C benchmarks, the method improves robustness against 15 corruption types by +12.7% on average while preserving clean-data accuracy, achieving joint optimization of robustness and accuracy.
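The sensitivity assessment above can be sketched with a Monte-Carlo Shapley estimate over point-cloud regions. This is an illustrative sketch, not the paper's implementation: `model`, the region partition, and the centroid-collapse masking are all assumptions made here for a self-contained example.

```python
import numpy as np

def shapley_sensitivity(model, points, class_idx, regions, n_samples=64, rng=None):
    """Monte-Carlo estimate of each region's Shapley contribution to one logit.

    model(pc) -> logits, a 1-D np.ndarray over classes (hypothetical interface).
    points: (N, 3) point cloud; regions: list of index arrays partitioning points.
    An "absent" region is masked by collapsing its points onto the global
    centroid -- a common point-cloud ablation, assumed here, not taken from
    the paper.
    """
    rng = np.random.default_rng(rng)
    centroid = points.mean(axis=0)
    n_regions = len(regions)
    values = np.zeros(n_regions)

    def play(present):
        pc = points.copy()
        for r in range(n_regions):
            if not present[r]:
                pc[regions[r]] = centroid  # mask absent regions
        return model(pc)[class_idx]

    for _ in range(n_samples):
        order = rng.permutation(n_regions)   # random coalition order
        present = np.zeros(n_regions, dtype=bool)
        prev = play(present)
        for r in order:
            present[r] = True
            cur = play(present)
            values[r] += cur - prev          # marginal contribution of region r
            prev = cur
    return values / n_samples
```

Regions with large absolute values are the ones the model leans on most; these are the candidates the framework suppresses when generating adversarial samples.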
📝 Abstract
Due to scene complexity, sensor inaccuracy, and processing imprecision, point cloud corruption is inevitable. Over-reliance on input features is a root cause of DNN vulnerability, yet it remains unclear whether this issue also arises in 3D point cloud tasks and whether reducing dependence on such features can improve robustness to corrupted point clouds. This study attempts to answer these questions. Specifically, we quantify the sensitivity of a DNN to point cloud features using Shapley values and find that models trained with conventional methods exhibit high sensitivity to certain features. Moreover, at an equal pruning ratio, preferentially pruning highly sensitive features damages model performance far more severely than random pruning. We propose Desensitized Adversarial Training (DesenAT), which generates adversarial samples via feature desensitization and trains the model within a self-distillation framework, aiming to alleviate the DNN's over-reliance on point cloud features by smoothing its sensitivity. First, data points with high-contribution components are eliminated and spatial transformations are applied to simulate corruption scenarios, producing adversarial samples on which the model is adversarially trained. Then, to compensate for the information loss in adversarial samples, we use self-distillation to transfer knowledge from clean samples to adversarial samples, performing adversarial training in a distillation manner. Extensive experiments on ModelNet-C and PointCloud-C demonstrate that the proposed method effectively improves model robustness without reducing performance on clean datasets. The code is publicly available at [https://github.com/JerkyT/DesenAT](https://github.com/JerkyT/DesenAT/tree/master).
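The self-distillation step described above can be sketched as a standard distillation loss in which the clean-sample branch serves as teacher for the adversarial (desensitized) branch. This is a minimal NumPy sketch under common distillation conventions; the temperature `T`, weighting `alpha`, and loss form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distill_loss(student_logits_adv, teacher_logits_clean, labels, T=4.0, alpha=0.7):
    """alpha * T^2 * KL(teacher || student) + (1 - alpha) * CE(student, labels).

    student_logits_adv: logits on adversarial samples, shape (B, C).
    teacher_logits_clean: logits on the matching clean samples (treated as
    fixed targets; in a real framework the teacher branch would be detached).
    """
    p_teacher = softmax(teacher_logits_clean, T)       # soft targets from clean branch
    log_p_student = np.log(softmax(student_logits_adv, T))
    kl = (p_teacher * (np.log(p_teacher) - log_p_student)).sum(axis=-1).mean()
    p_hard = softmax(student_logits_adv)               # T=1 for the hard-label term
    ce = -np.log(p_hard[np.arange(len(labels)), labels]).mean()
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

The KL term pulls the adversarial branch toward the clean branch's predictive distribution, which is how the lost information is transferred back; the cross-entropy term keeps the hard labels in play.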