AI Summary
Modern deep vision models rely on fragile pixel-level representations, rendering them highly susceptible to adversarial perturbations; conventional defenses operate solely in the pixel domain and fail to model intrinsically robust features. Method: We propose a multimodal defense framework that, for the first time, couples SIFT keypoints (providing scale- and rotation-invariant local structural priors) with a graph attention network (GAT) to construct non-pixel, cross-modal robust feature maps, which are jointly leveraged with ViT or CNN backbones for inference. Contribution: The framework significantly enhances model robustness against white-box gradient-based attacks while incurring only a marginal degradation in clean accuracy. It introduces two key innovations: (i) structure-aware enhancement via geometrically invariant keypoints, and (ii) effective cross-modal feature fusion between handcrafted local descriptors and learned deep representations.
Abstract
Adversarial attacks expose a fundamental vulnerability in modern deep vision models by exploiting their dependence on dense, pixel-level representations that are highly sensitive to imperceptible perturbations. Traditional defense strategies typically operate within this fragile pixel domain, lacking mechanisms to incorporate inherently robust visual features. In this work, we introduce SIFT-Graph, a multimodal defense framework that enhances the robustness of traditional vision models by aggregating structurally meaningful features extracted from raw images using both handcrafted and learned modalities. Specifically, we integrate Scale-Invariant Feature Transform keypoints with a Graph Attention Network to capture scale- and rotation-invariant local structures that are resilient to perturbations. These robust feature embeddings are then fused with traditional vision models, such as the Vision Transformer and Convolutional Neural Network, to form a unified, structure-aware and perturbation-resistant model. Preliminary results demonstrate that our method effectively improves model robustness against gradient-based white-box adversarial attacks, while incurring only a marginal drop in clean accuracy.
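The pipeline described above (keypoints → graph → attention-based aggregation → fusion with a backbone embedding) can be sketched as follows. This is a minimal, illustrative numpy-only mock-up, not the authors' implementation: the random descriptors stand in for real SIFT output (which would come from `cv2.SIFT_create().detectAndCompute`), the GAT layer is a single untrained attention head, and the backbone feature is a placeholder for a ViT/CNN embedding. All array sizes (`N`, `D`, `d_out`, the 768-D backbone vector) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for SIFT output: N keypoints with (x, y) locations and 128-D
# descriptors (a real pipeline would call cv2.SIFT_create().detectAndCompute).
N, D = 32, 128
locs = rng.uniform(0, 224, size=(N, 2))
desc = rng.normal(size=(N, D))

# Build a k-nearest-neighbor graph over keypoint locations.
k = 4
dists = np.linalg.norm(locs[:, None] - locs[None, :], axis=-1)
np.fill_diagonal(dists, np.inf)          # no self-edges
nbrs = np.argsort(dists, axis=1)[:, :k]  # (N, k) neighbor indices

# One GAT-style attention layer (single head); random weights stand in
# for learned parameters.
d_out = 64
W = rng.normal(scale=0.1, size=(D, d_out))
a = rng.normal(scale=0.1, size=(2 * d_out,))

h = desc @ W  # projected node features, shape (N, d_out)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

out = np.empty_like(h)
for i in range(N):
    js = nbrs[i]
    # Attention logits e_ij = LeakyReLU(a^T [h_i || h_j]) over the neighborhood.
    e = leaky_relu(
        np.concatenate([np.repeat(h[i][None], k, axis=0), h[js]], axis=1) @ a
    )
    alpha = np.exp(e - e.max())
    alpha /= alpha.sum()        # softmax over neighbors
    out[i] = alpha @ h[js]      # attention-weighted aggregation

# Mean-pool keypoint embeddings into one structural feature vector, then
# fuse (concatenate) with a backbone feature (placeholder for a ViT [CLS]
# embedding or CNN pooled feature).
graph_feat = out.mean(axis=0)            # (d_out,)
backbone_feat = rng.normal(size=(768,))  # hypothetical backbone embedding
fused = np.concatenate([graph_feat, backbone_feat])
print(fused.shape)  # (832,)
```

In a trained system, `fused` would feed a classification head, so the prediction depends jointly on pixel-derived backbone features and the perturbation-resilient keypoint structure.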