🤖 AI Summary
This study investigates the robustness of complex-valued neural networks (CVNNs) under adversarial perturbations, with a particular focus on their sensitivity to phase information. To this end, the authors introduce the first "phase attack", a novel adversarial method specifically designed to perturb the phase component of complex inputs, and develop complex-valued counterparts of classical adversarial attacks by leveraging gradient computation in the complex domain. Experimental results show that, although CVNNs exhibit superior robustness to real-valued networks on certain tasks, they are highly vulnerable to phase perturbations: under equivalent perturbation magnitudes, phase attacks cause significantly greater performance degradation than conventional attacks, revealing a distinctive and critical vulnerability of complex-valued representations.
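The summary mentions complex-valued counterparts of classical attacks built on gradients in the complex domain. As a rough illustration only (the paper's exact formulation is not given here), one plausible complex analogue of FGSM steps each input entry by a fixed modulus `eps` along the Wirtinger gradient with respect to the conjugate input, which is the steepest-ascent direction for a real-valued loss; the helper name `complex_fgsm` and the toy loss are hypothetical:

```python
import numpy as np

def complex_fgsm(x, grad_conj, eps):
    """Sketch of a complex-valued FGSM step (an assumption, not the
    paper's method): move each entry by modulus eps along the direction
    of dL/d(conj(x)), the Wirtinger steepest-ascent direction."""
    unit = grad_conj / np.maximum(np.abs(grad_conj), 1e-12)
    return x + eps * unit

# Toy real-valued loss L(x) = sum |x - t|^2, whose Wirtinger gradient
# with respect to conj(x) is simply (x - t).
t = np.array([0.0 + 0.0j, 1.0 + 1.0j])
x = np.array([1.0 + 0.0j, 2.0 - 1.0j])
loss = lambda v: float(np.sum(np.abs(v - t) ** 2))

x_adv = complex_fgsm(x, x - t, eps=0.1)
assert loss(x_adv) > loss(x)  # the attack increases the loss
```

Note that, unlike real FGSM's `sign`, the unit-modulus normalization here can rotate each entry in any direction of the complex plane, so both magnitude and phase of the input may change.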
📝 Abstract
Complex-valued neural networks (CVNNs) are gaining popularity across a wide range of applications. To use CVNNs safely in practice, it is crucial to analyze their robustness against outliers. One well-known technique for understanding the behavior of deep neural networks is to study them under adversarial attacks, which can be viewed as worst-case minimal perturbations. We design Phase Attacks, a class of attacks that specifically targets the phase information of complex-valued inputs. Additionally, we derive complex-valued versions of commonly used adversarial attacks. We show that in some scenarios CVNNs are more robust than real-valued neural networks (RVNNs), and that both are very susceptible to phase changes: Phase Attacks decrease model performance more than equally strong regular attacks, which can perturb both phase and magnitude.
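The core idea of a Phase Attack, perturbing only the phase while leaving the magnitude intact, can be sketched as follows. This is a minimal illustration under our own assumptions (the function name `phase_attack`, the FGSM-style sign step, and the finite-difference gradient are all ours, not taken from the paper):

```python
import numpy as np

def phase_attack(z, grad_phase, eps):
    """Sketch of a phase-only FGSM-style step: shift each entry's phase
    by eps in the gradient's sign direction; |z| is preserved exactly."""
    mag, phase = np.abs(z), np.angle(z)
    return mag * np.exp(1j * (phase + eps * np.sign(grad_phase)))

# Toy stand-in for a network loss, plus a finite-difference gradient of
# the loss with respect to the phase of each input entry.
def loss(z):
    return float(np.sum(np.real(z)))

def phase_grad(z, h=1e-6):
    mag, phi = np.abs(z), np.angle(z)
    g = np.empty_like(phi)
    for k in range(len(z)):
        phi_p, phi_m = phi.copy(), phi.copy()
        phi_p[k] += h
        phi_m[k] -= h
        g[k] = (loss(mag * np.exp(1j * phi_p)) - loss(mag * np.exp(1j * phi_m))) / (2 * h)
    return g

z = np.array([1.0 + 1.0j, 2.0 - 0.5j])
z_adv = phase_attack(z, phase_grad(z), eps=0.1)
assert np.allclose(np.abs(z_adv), np.abs(z))  # magnitude untouched
```

Constraining the perturbation to the phase is what makes the comparison in the abstract meaningful: a regular attack of equal strength may spend its budget on both magnitude and phase, yet the phase-only attack degrades performance more.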