Rethinking Gradient-based Adversarial Attacks on Point Cloud Classification

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing gradient-based adversarial attacks on point clouds overlook point cloud heterogeneity, leading to excessive and perceptible perturbations. To address this, we propose WAAttack—a novel framework that (i) introduces weighted gradient updates and an adaptive step-size mechanism to explicitly model local structural sensitivity, and (ii) designs SubAttack, a subset-optimization strategy that allocates perturbations in a heterogeneity-aware manner by focusing on structurally critical regions. Evaluated on mainstream models (PointNet, PointNet++) and standard benchmarks (ModelNet40, ShapeNet), WAAttack achieves state-of-the-art attack success rates while significantly reducing average perturbation magnitude (↓32.7%) and geometric distortion as measured by Chamfer distance (↓28.4%). The resulting adversarial examples are high-fidelity and imperceptible to human vision.

📝 Abstract
Gradient-based adversarial attacks have become a dominant approach for evaluating the robustness of point cloud classification models. However, existing methods often rely on uniform update rules that fail to consider the heterogeneous nature of point clouds, resulting in excessive and perceptible perturbations. In this paper, we rethink the design of gradient-based attacks by analyzing the limitations of conventional gradient update mechanisms and propose two new strategies to improve both attack effectiveness and imperceptibility. First, we introduce WAAttack, a novel framework that incorporates weighted gradients and an adaptive step-size strategy to account for the non-uniform contribution of points during optimization. This approach enables more targeted and subtle perturbations by dynamically adjusting updates according to the local structure and sensitivity of each point. Second, we propose SubAttack, a complementary strategy that decomposes the point cloud into subsets and focuses perturbation efforts on structurally critical regions. Together, these methods represent a principled rethinking of gradient-based adversarial attacks for 3D point cloud classification. Extensive experiments demonstrate that our approach outperforms state-of-the-art baselines in generating highly imperceptible adversarial examples. Code will be released upon paper acceptance.
Problem

Research questions and friction points this paper is trying to address.

Improving gradient-based attacks on point cloud classification
Addressing excessive perturbations in existing attack methods
Enhancing attack effectiveness and imperceptibility simultaneously
Innovation

Methods, ideas, or system contributions that make the work stand out.

Weighted gradients for targeted, subtle perturbations
Adaptive step-size strategy during optimization
Subset decomposition focusing on structurally critical regions
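The two contributions above can be illustrated with a minimal NumPy sketch. This is not the paper's released implementation (code is unreleased); all function names and the exact weighting rule are assumptions. The sketch uses per-point gradient magnitude as a stand-in for "local structural sensitivity": points with larger gradient norms receive larger update steps (weighted gradients + adaptive step size), and a SubAttack-style mask restricts perturbation to the most sensitive subset of points.

```python
import numpy as np

def weighted_adaptive_step(points, grad, base_step=0.01, eps=1e-12):
    """One hypothetical weighted-gradient update (sketch, not the paper's exact rule).

    points: (N, 3) point cloud
    grad:   (N, 3) gradient of the attack loss w.r.t. each point
    Per-point weights are derived from gradient magnitude as a proxy for
    local sensitivity, so low-sensitivity points are perturbed less.
    """
    norms = np.linalg.norm(grad, axis=1, keepdims=True)   # (N, 1) per-point magnitude
    weights = norms / (norms.max() + eps)                 # normalize weights to [0, 1]
    step = base_step * weights                            # adaptive per-point step size
    direction = grad / (norms + eps)                      # unit update directions
    return points + step * direction

def subattack_mask(grad, top_frac=0.2):
    """Select the most sensitive subset of points (hypothetical SubAttack proxy)."""
    scores = np.linalg.norm(grad, axis=1)                 # sensitivity score per point
    k = max(1, int(top_frac * len(scores)))               # size of the critical subset
    mask = np.zeros(len(scores), dtype=bool)
    mask[np.argsort(scores)[-k:]] = True                  # keep the top-k scoring points
    return mask
```

In a full attack loop, `grad` would come from backpropagating a misclassification loss through the victim model (e.g. PointNet), and the mask would zero out updates outside the critical region before applying `weighted_adaptive_step`.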
Jun Chen
School of Computing and Artificial Intelligence, Southwest Jiaotong University; Engineering Research Center of Sustainable Urban Intelligent Transportation Ministry of Education, Chengdu, China
Xinke Li
College of Computing, City University of Hong Kong
Mingyue Xu
SWJTU-Leeds Joint School, Southwest Jiaotong University
Tianrui Li
School of Computing and Artificial Intelligence, Southwest Jiaotong University
Big Data Intelligence, Urban Computing, Granular Computing
Chongshou Li
School of Computing and Artificial Intelligence, Southwest Jiaotong University
Hierarchical Learning, Point Cloud Learning, Robust Learning, Machine Learning