Can Go AIs be adversarially robust?

📅 2024-06-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates the adversarial robustness of superhuman Go AIs against cyclic adversarial attacks. The authors systematically evaluate vulnerability to such attacks and test three defense strategies: adversarial training on handcrafted positions, iterated adversarial training, and modifications to the network architecture. Empirical evaluation across multiple models and previously unseen adversaries assesses how well each defense generalizes. The results show that existing defenses fail against novel cyclic-attack variants, that cyclic attacks transfer strongly across models, and that the fundamental bottlenecks are insufficient training diversity and the difficulty of transferring robust knowledge. To the authors' knowledge, this work is the first to reveal how pervasive a threat cyclic attacks pose to high-performance AI systems. It identifies two critical directions for overcoming current robustness limitations: expanding the coverage of the training distribution and designing transferable robustness mechanisms.

📝 Abstract
Prior work found that superhuman Go AIs can be defeated by simple adversarial strategies, especially "cyclic" attacks. In this paper, we study whether adding natural countermeasures can achieve robustness in Go, a favorable domain for robustness since it benefits from incredible average-case capability and a narrow, innately adversarial setting. We test three defenses: adversarial training on hand-constructed positions, iterated adversarial training, and changing the network architecture. We find that though some of these defenses protect against previously discovered attacks, none withstand freshly trained adversaries. Furthermore, most of the reliably effective attacks these adversaries discover are different realizations of the same overall class of cyclic attacks. Our results suggest that building robust AI systems is challenging even with extremely superhuman systems in some of the most tractable settings, and highlight two key gaps: efficient generalization of defenses, and diversity in training. For interactive examples of attacks and a link to our codebase, see https://goattack.far.ai.
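The iterated adversarial training defense mentioned in the abstract alternates between training a new adversary against the current victim and fine-tuning the victim against whatever exploit that adversary finds. A minimal toy sketch of that loop is below; the function names, the set-based bookkeeping, and the string-labeled exploits are illustrative assumptions, not the authors' implementation:

```python
def train_adversary(victim_weaknesses, known_defenses):
    """Stub for adversary training: find an exploit the victim
    does not yet defend against, or None if none remains."""
    for exploit in victim_weaknesses:
        if exploit not in known_defenses:
            return exploit
    return None

def iterated_adversarial_training(victim_weaknesses, rounds):
    """Alternate adversary training and victim fine-tuning."""
    defenses = set()
    for _ in range(rounds):
        exploit = train_adversary(victim_weaknesses, defenses)
        if exploit is None:
            break  # no new exploit discovered this round
        defenses.add(exploit)  # stand-in for fine-tuning the victim
    return defenses

# If the space of exploits is larger than the number of training
# rounds, the victim stays exploitable -- echoing the finding that
# freshly trained adversaries keep discovering new cyclic variants.
defenses = iterated_adversarial_training(
    ["cyclic-v1", "cyclic-v2", "cyclic-v3"], rounds=2
)
```

The point of the toy model is that each round only patches the specific exploit found, so robustness grows one variant at a time rather than generalizing across the whole attack class.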
Problem

Research questions and friction points this paper is trying to address.

Go AI
Adversarial Attacks
Cyclic Strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Go AI Defense
Malicious Attack Resistance
Adversarial Robustness Strategies
Tom Tseng (FAR.AI)
Euan McLean (FAR.AI)
Kellin Pelrine (FAR.AI)
T. T. Wang (MIT)
A. Gleave (FAR.AI)

AI Security · AI Agents