Desensitizing for Improving Corruption Robustness in Point Cloud Classification through Adversarial Training

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In point cloud classification, deep neural networks (DNNs) exhibit severely degraded robustness under input corruptions—such as sensor noise and occlusion—due to over-reliance on particular geometric features. This work pioneers the use of Shapley values in point cloud robustness analysis, enabling quantitative assessment of how sensitive model predictions are to individual points or regions. Building on this, we propose a desensitization-based adversarial training framework: (i) spatial transformations simulate diverse corruptions; (ii) adversarial examples are generated by suppressing high-sensitivity regions; and (iii) self-distillation mitigates the information loss caused by feature suppression. Evaluated on the ModelNet-C and PointCloud-C benchmarks, our method improves robustness against 15 corruption types by +12.7% on average, while preserving clean-data accuracy—achieving joint optimization of robustness and accuracy.

📝 Abstract
Due to scene complexity, sensor inaccuracies, and processing imprecision, point cloud corruption is inevitable. Over-reliance on input features is a root cause of DNN vulnerability, yet it remains unclear whether this issue also arises in 3D tasks on point clouds, and whether reducing dependence on these features can improve a model's robustness to corrupted point clouds. This study attempts to answer these questions. Specifically, we quantify the sensitivity of a DNN to point cloud features using Shapley values and find that models trained with traditional methods exhibit high sensitivity to certain features. Furthermore, under an equal pruning ratio, prioritizing the pruning of highly sensitive features damages model performance more severely than random pruning. We propose Desensitized Adversarial Training (DesenAT), which generates adversarial samples via feature desensitization and trains the model within a self-distillation framework, aiming to alleviate the DNN's over-reliance on point cloud features by smoothing sensitivity. First, data points with high-contribution components are eliminated, and spatial transformations are used to simulate corruption scenes and generate adversarial samples for adversarial training. Next, to compensate for the information loss in adversarial samples, we use self-distillation to transfer knowledge from clean samples to adversarial samples, performing adversarial training in a distillation manner. Extensive experiments on ModelNet-C and PointCloud-C show that the proposed method effectively improves model robustness without reducing performance on clean datasets. The code is publicly available at https://github.com/JerkyT/DesenAT.
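The Shapley-value sensitivity analysis described in the abstract can be sketched with a Monte Carlo permutation estimate: each point's value is its average marginal contribution to the model's score as it joins random subsets of the cloud. The sketch below is illustrative, not the paper's code; `score_fn` stands in for a trained classifier's class logit, and the toy centroid-based score and the helper names are assumptions.

```python
import numpy as np

def shapley_sensitivity(points, score_fn, n_samples=100, rng=None):
    """Monte Carlo Shapley estimate of each point's sensitivity.

    For each random permutation, a point's marginal contribution is the
    change in score_fn when it joins the subset of points before it.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    phi = np.zeros(n)
    for _ in range(n_samples):
        perm = rng.permutation(n)
        mask = np.zeros(n, dtype=bool)
        prev = score_fn(points[mask])
        for i in perm:
            mask[i] = True
            cur = score_fn(points[mask])
            phi[i] += cur - prev  # marginal contribution of point i
            prev = cur
    return phi / n_samples

def prune_most_sensitive(points, phi, ratio=0.2):
    """Drop the fraction of points with the largest |Shapley| values,
    mimicking the paper's removal of high-contribution components."""
    k = int(len(points) * ratio)
    keep = np.argsort(np.abs(phi))[: len(points) - k]
    return points[np.sort(keep)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(16, 3))
    # Toy score: centroid distance from the origin, standing in for a
    # class logit of a real point-cloud classifier.
    score = lambda pts: 0.0 if len(pts) == 0 else float(np.linalg.norm(pts.mean(axis=0)))
    phi = shapley_sensitivity(cloud, score, n_samples=100, rng=1)
    pruned = prune_most_sensitive(cloud, phi, ratio=0.25)
    print(phi.shape, pruned.shape)
```

By the efficiency property of Shapley values, the estimates sum (exactly, for this telescoping estimator) to the full-cloud score minus the empty-set score, which is a useful sanity check when substituting a real model.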
Problem

Research questions and friction points this paper is trying to address.

Addresses DNN vulnerability to corrupted point clouds in 3D classification
Reduces over-reliance on sensitive features via adversarial training
Improves robustness without compromising clean dataset performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training with feature desensitization for robustness
Self-distillation transfers knowledge from clean to adversarial samples
Spatial transformation simulates corruption to generate adversarial samples
Zhiqiang Tian
Xi'an Jiaotong University
Computer Vision, Medical Image Analysis, Robotics
Weigang Li
School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
Chunhua Deng
School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
Junwei Hu
Undergraduate Student of Software Engineering, Tongji University
Software Engineering, AI4SE, SE4AI, NLP
Yongqiang Wang
School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
Wenping Liu
School of Information Management and Institute of Big Data and Digital Economy, Hubei University of Economics, Wuhan, China