🤖 AI Summary
To address the challenges of inaccessible target-model information and low adversarial transferability in black-box 3D point cloud attacks, this paper proposes a key-feature-guided transferable attack method. Our approach leverages critical geometric and semantic features that are consistent across models to design a feature-importance-driven adversarial search strategy, jointly optimizing transferability, imperceptibility (via an L∞-norm constraint), and structural fidelity in the loss function. Methodologically, we introduce the first transferability-enhancing prior grounded in cross-model key-feature consistency, requiring no assumptions about, or queries to, the target model. Extensive experiments on ModelNet40 and ScanObjectNN demonstrate state-of-the-art performance: our method achieves an average 12.7% higher transfer success rate than prior works while reducing perturbation magnitude by 38%, striking a markedly better balance between the attack effectiveness and the visual stealth of point cloud adversarial examples.
📝 Abstract
Deep neural networks for 3D point clouds have been shown to be vulnerable to adversarial examples. Previous 3D adversarial attack methods often exploit information about the target models, such as model parameters or outputs, to generate adversarial point clouds. In realistic scenarios, however, a strictly secured target model exposes no such information. We therefore focus on transfer-based attacks, in which generating adversarial point clouds requires no information about the target models. Based on our observation that the critical features used for point cloud classification are consistent across different DNN architectures, we propose CFG, a novel transfer-based black-box attack method that improves the transferability of adversarial point clouds via the proposed Critical Feature Guidance. Specifically, our method regularizes the search for adversarial point clouds by computing the importance of the extracted features, prioritizing the corruption of critical features that are likely to be adopted by diverse architectures. Furthermore, we explicitly constrain the maximum deviation of the generated adversarial point clouds in the loss function to ensure their imperceptibility. Extensive experiments on the ModelNet40 and ScanObjectNN benchmark datasets demonstrate that the proposed CFG outperforms state-of-the-art attack methods by a large margin.
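The two ingredients described above, an importance-weighted search that prioritizes corrupting critical features and an explicit bound on the maximum deviation, can be sketched as a simple iterative perturbation loop. The code below is a minimal illustration under stated assumptions, not the paper's actual CFG algorithm: `critical_feature_attack`, `grad_fn`, the per-point `importance` weights, and all hyperparameters are hypothetical stand-ins, and CFG's real importance scores come from features extracted by a surrogate network rather than being supplied directly.

```python
import numpy as np

def critical_feature_attack(points, grad_fn, importance,
                            eps=0.05, steps=10, alpha=0.01):
    """Toy importance-guided perturbation search (hypothetical API).

    points:     (N, 3) clean point cloud
    grad_fn:    callable returning d(loss)/d(points) of a surrogate model, shape (N, 3)
    importance: (N,) per-point criticality weights in [0, 1]
    eps:        maximum per-coordinate deviation (imperceptibility constraint)
    """
    adv = points.copy()
    w = importance[:, None]          # broadcast one weight over x, y, z
    for _ in range(steps):
        g = grad_fn(adv)
        # ascend the surrogate loss, moving critical points the most
        adv = adv + alpha * w * np.sign(g)
        # project back into the eps-ball around the clean cloud,
        # mirroring the explicit max-deviation constraint in the loss
        adv = points + np.clip(adv - points, -eps, eps)
    return adv
```

With a constant toy gradient, points with zero importance stay fixed while high-importance points drift up to the `eps` bound, which is the qualitative behavior the method relies on: damage concentrates on features many architectures share, while every coordinate stays within the imperceptibility budget.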