🤖 AI Summary
This work addresses a critical limitation of existing 3D point cloud adversarial attacks, which focus predominantly on geometric perturbations while neglecting the influence of topological structure on model robustness. To bridge this gap, the paper introduces the first topology-aware adversarial attack framework, which treats topological features as an explicit attack dimension. The proposed method is an end-to-end differentiable architecture that leverages differentiable persistent homology representations and persistence diagram embeddings to jointly optimize a composite objective comprising a topological discrepancy loss, a misclassification loss, and geometric imperceptibility constraints. By enabling gradient-guided perturbations that alter semantic interpretation without compromising geometric fidelity, the approach challenges the conventional assumption that preserving geometry preserves semantics. Extensive experiments demonstrate state-of-the-art performance: attack success rates of up to 100% against PointNet and DGCNN on the ModelNet40, ShapeNet Part, and ScanObjectNN benchmarks, while significantly outperforming existing methods on multiple perceptibility metrics.
📝 Abstract
Deep neural networks for 3D point cloud understanding have achieved remarkable success in object classification and recognition, yet recent work shows that these models remain highly vulnerable to adversarial perturbations. Existing 3D attacks predominantly manipulate geometric properties such as point locations, curvature, or surface structure, implicitly assuming that preserving global shape fidelity also preserves semantic content. In this work, we challenge this assumption and introduce the first topology-driven adversarial attack for point cloud deep learning. Our key insight is that the homological structure of a 3D object constitutes a previously unexplored vulnerability surface. We propose Topo-ADV, an end-to-end differentiable framework that incorporates persistent homology as an explicit optimization objective, enabling gradient-based manipulation of topological features during adversarial example generation. By embedding persistence diagrams through differentiable topological representations, our method jointly optimizes (i) a topology divergence loss that perturbs the persistence diagram, (ii) a misclassification objective, and (iii) geometric imperceptibility constraints that preserve visual plausibility. Experiments demonstrate that subtle topology-driven perturbations consistently achieve up to 100% attack success rates against PointNet and DGCNN classifiers on the ModelNet40, ShapeNet Part, and ScanObjectNN benchmarks, while remaining geometrically indistinguishable from the original point clouds and outperforming state-of-the-art methods on multiple perceptibility metrics.
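The abstract does not give the exact form of the three loss terms, but the joint objective can be illustrated with a minimal, hypothetical sketch. Everything here is an assumption for illustration: `topology_proxy_loss` uses k-nearest-neighbor distance profiles as a cheap stand-in for the paper's differentiable persistence-diagram term (0-dimensional persistent homology of a point cloud is built from such distances), the geometric constraint is a simple per-point L2 penalty rather than the paper's imperceptibility metrics, and the names `composite_loss`, `lam_topo`, `lam_geo` are invented here, not taken from Topo-ADV.

```python
import torch

def topology_proxy_loss(x, x_adv, k=4):
    """Hypothetical proxy for the topology divergence loss.

    Compares k-nearest-neighbor distance profiles of the clean and
    adversarial clouds; returned with a negative sign so that
    *minimizing* the composite objective *increases* topological
    divergence, as the attack intends.
    """
    d = torch.cdist(x, x)            # (N, N) pairwise distances, clean
    d_adv = torch.cdist(x_adv, x_adv)
    # smallest k+1 distances per point; column 0 is the self-distance
    knn = d.topk(k + 1, largest=False).values[:, 1:]
    knn_adv = d_adv.topk(k + 1, largest=False).values[:, 1:]
    return -(knn - knn_adv).pow(2).mean()

def composite_loss(model, x, x_adv, target, lam_topo=1.0, lam_geo=10.0):
    """Sketch of the joint objective: misclassification + topology
    divergence + geometric imperceptibility (weights are illustrative)."""
    logits = model(x_adv.unsqueeze(0))                   # (1, num_classes)
    ce = torch.nn.functional.cross_entropy(logits, target)
    topo = topology_proxy_loss(x, x_adv)
    geo = (x_adv - x).pow(2).sum(dim=1).mean()           # per-point L2 penalty
    return ce + lam_topo * topo + lam_geo * geo
```

A gradient step on `x_adv` (e.g. a PGD-style update `x_adv - eps * x_adv.grad.sign()`) would then move the cloud toward misclassification and topological change while the `lam_geo` term keeps it geometrically close to the original.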