🤖 AI Summary
Balancing adversarial robustness and inference efficiency remains challenging for CNNs. Method: We embed a single sparse Mixture-of-Experts (MoE) layer into a ResNet and combine it with adversarial training to enhance robustness. Crucially, we observe that balancing expert routing via the switch loss induces spontaneous routing collapse during adversarial training, causing certain experts to evolve into highly robust sub-paths, termed "robust experts." Results: These robust experts significantly outperform both the full gated MoE model and baseline models under PGD and AutoPGD attacks, without increasing inference FLOPs. On CIFAR-100, our approach achieves state-of-the-art robust accuracy. Importantly, this work uncovers, for the first time, a self-organizing emergence of robustness in sparse MoEs under adversarial training, a novel mechanism enabling lightweight, efficient, and robust model design.
📝 Abstract
Robustifying convolutional neural networks (CNNs) against adversarial attacks remains challenging and often requires resource-intensive countermeasures. We explore the use of sparse mixture-of-experts (MoE) layers to improve robustness by replacing selected residual blocks or convolutional layers, thereby increasing model capacity without additional inference cost. On ResNet architectures trained on CIFAR-100, we find that inserting a single MoE layer in the deeper stages leads to consistent improvements in robustness under PGD and AutoPGD attacks when combined with adversarial training. Furthermore, we discover that when the switch loss is used for load balancing, routing collapses onto a small set of overused experts, thereby concentrating adversarial training on these paths and inadvertently making them more robust. As a result, some individual experts outperform the gated MoE model in robustness, suggesting that robust subpaths emerge through specialization. Our code is available at https://github.com/KASTEL-MobilityLab/robust-sparse-moes.
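The routing-collapse observation centers on the switch loss, the auxiliary load-balancing objective introduced with Switch Transformers: with n experts, L = n · Σᵢ fᵢ · Pᵢ, where fᵢ is the fraction of inputs routed (top-1) to expert i and Pᵢ is the mean router probability assigned to expert i. As a minimal sketch (the paper's exact formulation and coefficients may differ; the function name and pure-Python setup are illustrative only), this loss can be computed as:

```python
import math

def switch_load_balance_loss(router_logits):
    """Switch-style auxiliary load-balancing loss: n_experts * sum_i f_i * P_i.

    router_logits: list of per-input router logit lists, shape (batch, n_experts).
    f_i is the fraction of inputs whose top-1 expert is i; P_i is the mean
    softmax router probability for expert i. Uniform routing minimizes the
    loss at 1.0; a collapsed router drives it toward n_experts.
    """
    batch = len(router_logits)
    n_experts = len(router_logits[0])
    loads = [0] * n_experts          # top-1 routing counts per expert
    mean_prob = [0.0] * n_experts    # running mean of router probabilities
    for logits in router_logits:
        m = max(logits)              # shift for numerical stability
        exps = [math.exp(x - m) for x in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        loads[probs.index(max(probs))] += 1   # top-1 routing decision
        for i, p in enumerate(probs):
            mean_prob[i] += p / batch
    f = [count / batch for count in loads]
    return n_experts * sum(fi * pi for fi, pi in zip(f, mean_prob))

# Balanced router (uniform logits): loss sits at its minimum of 1.0.
balanced = switch_load_balance_loss([[0.0, 0.0, 0.0, 0.0]] * 8)

# Collapsed router: every input is sent to expert 0, loss approaches n_experts.
collapsed = switch_load_balance_loss([[10.0, 0.0, 0.0, 0.0]] * 8)
```

In the collapsed regime described above, nearly all inputs reach one expert, so that expert receives the bulk of the adversarial examples during training, which is consistent with the emergence of individually robust experts reported here.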