Towards Adaptive Meta-Gradient Adversarial Examples for Visual Tracking

📅 2025-05-13
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the security evaluation of visual trackers in realistic scenarios by proposing the Adaptive Meta-Gradient Adversarial Attack (AMGA). AMGA integrates meta-learning optimization with a stochastic multi-model ensemble, combining momentum-based gradient updates and Gaussian smoothing to establish a unified white-box and black-box iterative attack framework. Within this framework, it simultaneously enhances adversarial efficacy and cross-model transferability. Experiments on the OTB2015, LaSOT, and GOT-10k benchmarks show that AMGA significantly outperforms existing state-of-the-art methods, substantially narrows the performance gap between white-box and black-box attacks, and achieves strong cross-architecture generalization. The source code and datasets are publicly available.

📝 Abstract
In recent years, visual tracking methods based on convolutional neural networks and Transformers have achieved remarkable performance and have been successfully applied in fields such as autonomous driving. However, the numerous security issues exposed by deep learning models have gradually affected the reliable application of visual tracking methods in real-world scenarios. Therefore, how to reveal the security vulnerabilities of existing visual trackers through effective adversarial attacks has become a critical problem that needs to be addressed. To this end, we propose an adaptive meta-gradient adversarial attack (AMGA) method for visual tracking. This method integrates multi-model ensembles and meta-learning strategies, combining momentum mechanisms and Gaussian smoothing, which can significantly enhance the transferability and attack effectiveness of adversarial examples. AMGA randomly selects models from a large model repository, constructs diverse tracking scenarios, and iteratively performs both white- and black-box adversarial attacks in each scenario, optimizing the gradient directions of each model. This paradigm minimizes the gap between white- and black-box adversarial attacks, thus achieving excellent attack performance in black-box scenarios. Extensive experimental results on large-scale datasets such as OTB2015, LaSOT, and GOT-10k demonstrate that AMGA significantly improves the attack performance, transferability, and deception of adversarial examples. Codes and data are available at https://github.com/pgao-lab/AMGA.
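The abstract describes an iterative attack that combines a momentum mechanism with Gaussian smoothing of the gradients. The following is a toy NumPy sketch of that generic mechanism (an MI-FGSM-style update applied to Gaussian-smoothed gradients), not the authors' AMGA implementation; `grad_fn`, the kernel size, and the step schedule are all illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # separable 1-D Gaussian; the outer product gives the 2-D kernel
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def smooth(grad, kernel):
    # naive 2-D convolution with edge padding (toy-sized inputs only)
    H, W = grad.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(grad, pad, mode="edge")
    out = np.empty_like(grad)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def momentum_attack(x, grad_fn, eps=0.03, steps=10, mu=1.0, sigma=1.0):
    """Iterative L_inf attack: momentum-accumulated, Gaussian-smoothed
    gradient sign updates, projected back into the eps-ball around x."""
    kernel = gaussian_kernel(5, sigma)
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = smooth(grad_fn(x_adv), kernel)
        # momentum accumulation, normalized by the gradient's L1 norm
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# hypothetical usage with a dummy gradient oracle
x = np.zeros((8, 8))
adv = momentum_attack(x, lambda z: np.ones_like(z), eps=0.03, steps=5)
```

The smoothing step damps high-frequency, model-specific gradient noise, which is one common way to improve cross-model transferability of the resulting perturbation.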
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial attack transferability in visual tracking
Bridging white-box and black-box attack performance gaps
Improving security vulnerability assessment for visual trackers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive meta-gradient adversarial attack method
Multi-model ensembles and meta-learning strategies
Momentum mechanisms and Gaussian smoothing integration
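The meta-learning ensemble idea in the bullets above (sample white-box models as a meta-train set, hold one out as a black-box proxy for meta-test, then combine both gradient directions) can be sketched with toy linear "trackers." Everything below, including the model zoo, the attack loss, and the step sizes, is a hypothetical stand-in, not the paper's actual models or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate "trackers": each scores an input x by w.x; attacking
# means pushing the score down, so the adversarial gradient is -w.
model_zoo = [rng.normal(size=16) for _ in range(6)]

def adv_grad(w, x):
    # gradient of the toy attack loss -w.x with respect to x
    return -w

def meta_gradient_step(x, zoo, n_train=3, alpha=0.01):
    """One meta-iteration: randomly sampled white-box models form the
    meta-train set, one held-out model simulates the black-box case."""
    idx = rng.permutation(len(zoo))
    train_ids, test_id = idx[:n_train], idx[n_train]
    # meta-train: average gradient over the sampled ensemble
    g_train = np.mean([adv_grad(zoo[i], x) for i in train_ids], axis=0)
    x_tmp = x + alpha * np.sign(g_train)
    # meta-test: gradient on the held-out model at the updated point
    g_test = adv_grad(zoo[test_id], x_tmp)
    # combined direction is meant to narrow the white-/black-box gap
    return x + alpha * np.sign(g_train + g_test)

x = np.zeros(16)
for _ in range(10):
    x = meta_gradient_step(x, model_zoo)
```

The held-out meta-test gradient penalizes update directions that only work on the sampled white-box models, which is the intuition behind closing the gap between white-box and black-box attack performance.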
Wei-Long Tian
School of Cyber Science and Engineering, Qufu Normal University, China
Peng Gao
School of Cyber Science and Engineering, Qufu Normal University, China
Xiao Liu
School of Cyber Science and Engineering, Qufu Normal University, China
Long Xu
Ningbo University, Peng Cheng Laboratory
image/signal processing; video coding, especially rate control of video coding
Hamido Fujita
Iwate Prefectural University
Machine Learning
Hanan Aljuai
Computer Sciences Department, Princess Nourah bint Abdulrahman University, Saudi Arabia
Mao-Li Wang
School of Cyber Science and Engineering, Qufu Normal University, China