🤖 AI Summary
This work addresses the security evaluation of visual trackers in realistic scenarios by proposing the Adaptive Meta-Gradient Adversarial Attack (AMGA). AMGA integrates meta-learning optimization with a stochastic multi-model ensemble, combining momentum-based gradient updates and Gaussian smoothing into a unified iterative attack framework covering both white-box and black-box settings. Within this framework, it enhances adversarial efficacy and cross-model transferability simultaneously. Experiments on the OTB2015, LaSOT, and GOT-10k benchmarks demonstrate that AMGA significantly outperforms existing state-of-the-art methods, substantially narrows the performance gap between white-box and black-box attacks, and generalizes well across architectures. The source code and datasets are publicly available.
📝 Abstract
In recent years, visual tracking methods based on convolutional neural networks and Transformers have achieved remarkable performance and have been successfully applied in fields such as autonomous driving. However, the many security issues exposed in deep learning models have increasingly undermined the reliable deployment of visual trackers in real-world scenarios. Revealing the security vulnerabilities of existing visual trackers through effective adversarial attacks has therefore become a critical problem. To this end, we propose an adaptive meta-gradient adversarial attack (AMGA) method for visual tracking. AMGA integrates multi-model ensembles and meta-learning strategies with momentum mechanisms and Gaussian smoothing, significantly enhancing the transferability and attack effectiveness of adversarial examples. It randomly selects models from a large model repository to construct diverse tracking scenarios, and in each scenario iteratively performs both white- and black-box adversarial attacks while optimizing the gradient direction of each model. This paradigm minimizes the gap between white- and black-box adversarial attacks, thereby achieving excellent attack performance in black-box settings. Extensive experiments on large-scale datasets such as OTB2015, LaSOT, and GOT-10k demonstrate that AMGA significantly improves the attack performance, transferability, and deceptiveness of adversarial examples. Codes and data are available at https://github.com/pgao-lab/AMGA.
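The abstract's attack loop (sample a sub-ensemble of surrogate models, aggregate their gradients, smooth with a Gaussian kernel, accumulate momentum, take a signed step within a perturbation budget) can be sketched in a few lines. This is a minimal, hedged illustration only: the toy quadratic "model losses", the kernel radius, and all hyperparameter names (`epsilon`, `alpha`, `mu`, `n_sampled`) are placeholders, not the paper's actual trackers or settings, which are defined in the AMGA repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for surrogate trackers: each "model" is a weight matrix whose toy
# loss 0.5*||w ⊙ x||^2 has the closed-form gradient w ⊙ (w ⊙ x) w.r.t. x.
# In AMGA these would be deep trackers drawn from a large model repository.
model_weights = [rng.normal(size=(8, 8)) for _ in range(5)]

def model_grad(w, x):
    """Gradient of the placeholder loss w.r.t. the input x."""
    return w * (w * x)

def gaussian_smooth(g, sigma=1.0, radius=2):
    """Separable 1-D Gaussian smoothing applied along both axes of g."""
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    for axis in (0, 1):
        g = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, g)
    return g

def ensemble_attack_sketch(x, epsilon=0.1, alpha=0.02, mu=0.9,
                           n_iters=10, n_sampled=3):
    """Momentum + Gaussian-smoothed ensemble attack loop (a sketch, not the exact AMGA algorithm)."""
    delta = np.zeros_like(x)      # adversarial perturbation
    momentum = np.zeros_like(x)   # accumulated gradient direction
    for _ in range(n_iters):
        # Randomly sample a sub-ensemble of surrogates (diverse attack scenarios).
        idx = rng.choice(len(model_weights), size=n_sampled, replace=False)
        g = np.mean([model_grad(model_weights[i], x + delta) for i in idx], axis=0)
        g = gaussian_smooth(g)                                     # smooth the gradient field
        momentum = mu * momentum + g / (np.abs(g).sum() + 1e-12)   # normalized momentum update
        delta = delta + alpha * np.sign(momentum)                  # signed gradient step
        delta = np.clip(delta, -epsilon, epsilon)                  # project to the L_inf ball
    return delta

x = rng.normal(size=(8, 8))
delta = ensemble_attack_sketch(x)
print(np.abs(delta).max() <= 0.1)  # perturbation stays within the budget
```

The momentum normalization and sign step follow the standard momentum-iterative attack recipe that the abstract's "momentum mechanisms" most plausibly refer to; the random sub-ensemble per iteration mirrors the stated model sampling from a repository.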