🤖 AI Summary
This study investigates methods to enhance the generalization capability of adversarial patches against real-time object detectors and evaluates the effectiveness of adversarial training as a defense mechanism. Focusing on YOLOv10, the authors propose a higher-order adversarial patch generation approach that iteratively optimizes the patch and the detector through alternating updates in an adversarial training framework. Experimental results demonstrate that the higher-order patches not only effectively evade detection in white-box settings but also significantly outperform lower-order patches in cross-model transfer attacks, exhibiting superior generalization. Furthermore, the work reveals that adversarial training alone is insufficient to robustly defend against such higher-order attacks, highlighting its inherent limitations in mitigating advanced adversarial threats.
📝 Abstract
Higher-order adversarial attacks can be seen as the product of a cat-and-mouse game: an ongoing contest of constant pursuit, near captures, and repeated escapes. This idiom aptly describes the recurring cycle of crafting adversarial attack patterns and hardening models through adversarial training. The following work investigates the impact of higher-order adversarial attacks on object detectors by alternately training attack patterns and hardening object detectors with adversarial training. The YOLOv10 object detector is chosen as a representative, and adversarial patches are used as an evasion attack. Our results indicate that higher-order adversarial patches not only affect the object detector they were trained against but also provide stronger generalization than lower-order adversarial patches. Moreover, the results highlight that adversarial training alone is not sufficient to efficiently harden an object detector against this kind of adversarial attack. Code: https://github.com/JensBayer/HigherOrder
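The alternating attack-and-defense cycle described above can be illustrated with a minimal, self-contained sketch. This is not the paper's method (which operates on YOLOv10 with image patches); it replaces the detector with a toy logistic scorer and the patch with an additive perturbation, so the "order" of the loop and the alternation between patch optimization and adversarial training are the only things it demonstrates. All names (`optimize_patch`, `adversarial_train`, the toy dimensions and learning rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a detector: score s(x) = sigmoid(w @ x),
# where a score near 1 means "object detected".
dim = 16
w = rng.normal(size=dim)
x = rng.normal(size=dim) + w  # a sample the initial detector detects

def detect(w, x):
    return sigmoid(w @ x)

def optimize_patch(w, x, steps=200, lr=0.1):
    """Evasion attack: find an additive patch p that suppresses detection.

    Minimizes -log(1 - s) for the patched input; the gradient w.r.t. p is s * w.
    """
    p = np.zeros(dim)
    for _ in range(steps):
        s = detect(w, x + p)
        p -= lr * s * w  # gradient step that lowers the detection score
    return p

def adversarial_train(w, x, p, steps=200, lr=0.1):
    """Defense: retrain the detector weights to re-detect the patched sample.

    Minimizes -log(s) for the patched input; the gradient w.r.t. w is -(1 - s)(x + p).
    """
    for _ in range(steps):
        s = detect(w, x + p)
        w = w + lr * (1.0 - s) * (x + p)  # gradient step that restores the score
    return w

# Higher-order loop: each round re-optimizes the patch against the
# freshly hardened detector, then hardens the detector again.
orders = 3
for k in range(orders):
    p = optimize_patch(w, x)
    evaded = detect(w, x + p)        # score after the attack (should be low)
    w = adversarial_train(w, x, p)
    restored = detect(w, x + p)      # score after retraining (should be high)
    print(f"order {k}: patched score {evaded:.3f} -> after training {restored:.3f}")
```

Each round of this loop corresponds to one "order": a first-order patch is optimized against the original detector, while a higher-order patch has survived several hardening rounds, which is the property the paper links to better cross-model transfer.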