Towards a 3D Transfer-based Black-box Attack via Critical Feature Guidance

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of inaccessible target-model information and low adversarial transferability in black-box 3D point cloud attacks, this paper proposes CFG, a critical-feature-guided transferable attack method. The approach leverages critical geometric and semantic features that are consistent across models to design a feature-importance-driven adversarial search strategy, jointly optimizing transferability, imperceptibility (via an L∞-norm constraint), and structural fidelity in the loss function. Methodologically, it introduces the first transferability-enhancing prior grounded in cross-model critical-feature consistency, requiring no assumptions about or queries to the target model. Extensive experiments on ModelNet40 and ScanObjectNN demonstrate state-of-the-art performance: the method achieves an average transfer success rate 12.7% higher than prior works while reducing perturbation magnitude by 38%, striking a markedly better balance between attack effectiveness and the visual stealth of point cloud adversarial examples.

📝 Abstract
Deep neural networks for 3D point clouds have been demonstrated to be vulnerable to adversarial examples. Previous 3D adversarial attack methods often exploit certain information about the target models, such as model parameters or outputs, to generate adversarial point clouds. However, in realistic scenarios, it is challenging to obtain any information about the target models under conditions of absolute security. Therefore, we focus on transfer-based attacks, where generating adversarial point clouds does not require any information about the target models. Based on our observation that the critical features used for point cloud classification are consistent across different DNN architectures, we propose CFG, a novel transfer-based black-box attack method that improves the transferability of adversarial point clouds via the proposed Critical Feature Guidance. Specifically, our method regularizes the search of adversarial point clouds by computing the importance of the extracted features, prioritizing the corruption of critical features that are likely to be adopted by diverse architectures. Further, we explicitly constrain the maximum deviation extent of the generated adversarial point clouds in the loss function to ensure their imperceptibility. Extensive experiments conducted on the ModelNet40 and ScanObjectNN benchmark datasets demonstrate that the proposed CFG outperforms the state-of-the-art attack methods by a large margin.
Problem

Research questions and friction points this paper is trying to address.

Developing transfer-based black-box attacks for 3D point clouds
Improving adversarial example transferability across different architectures
Ensuring imperceptible perturbations while maintaining attack effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Critical Feature Guidance for transferability
Regularizes adversarial search via feature importance
Constrains deviation for imperceptibility in loss
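The two ingredients above (feature-importance-guided search and an explicit deviation bound) can be sketched in a toy form: importance scores weight how strongly each coordinate is perturbed, and an L∞ projection enforces imperceptibility. This is a minimal illustration under stated assumptions, not the authors' implementation; the linear feature weights and the magnitude-based importance score are hypothetical stand-ins for the gradient-based importance computed in the paper.

```python
# Hedged sketch: critical-feature-guided perturbation with an L-infinity bound.
# A 1-D "point cloud" and a linear feature map stand in for the real setting.

def linf_clip(delta, eps):
    """Project a perturbation onto the L-infinity ball of radius eps."""
    return [max(-eps, min(eps, d)) for d in delta]

def feature_importance(weights):
    """Toy importance score: magnitude of each feature's weight.
    (Stand-in for the gradient-based importance used in the paper.)"""
    return [abs(w) for w in weights]

def guided_step(points, weights, step, eps):
    """One update that perturbs each coordinate in proportion to the
    importance of the feature it feeds, then clips to the eps-ball."""
    imp = feature_importance(weights)
    total = sum(imp) or 1.0
    delta = [step * (i / total) for i in imp]
    delta = linf_clip(delta, eps)
    return [p + d for p, d in zip(points, delta)]

adv = guided_step([0.0, 0.0, 0.0], weights=[3.0, 1.0, 0.0], step=1.0, eps=0.1)
# Every coordinate stays within eps of the original point:
assert all(abs(a) <= 0.1 for a in adv)
```

The key property being illustrated is that the most "critical" coordinate (largest importance) receives the largest perturbation budget, while the clip guarantees the maximum-deviation constraint regardless of the importance distribution.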
Shuchao Pang
University of New South Wales
Medical image analysis, deep learning
Zhenghan Chen
STCA, Microsoft
Shen Zhang
MEGVII
Deep Learning, Computer Vision
Liming Lu
Nanjing University of Science and Technology
Siyuan Liang
College of Computing and Data Science, Nanyang Technological University
Trustworthy Foundation Model
Anan Du
Nanjing University of Industry Technology
Yongbin Zhou
Nanjing University of Science and Technology