Robust Experts: the Effect of Adversarial Training on CNNs with Sparse Mixture-of-Experts Layers

📅 2025-09-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Balancing adversarial robustness and inference efficiency remains challenging for CNNs. Method: A single sparse Mixture-of-Experts (MoE) layer is inserted into a ResNet and the model is adversarially trained. Crucially, when expert routing is balanced with the Switch Loss, routing collapses onto a small set of overused experts during adversarial training; the adversarial examples concentrate on these paths, which evolve into highly robust sub-paths, termed "robust experts." Results: On CIFAR-100, the MoE models improve robustness under PGD and AutoPGD attacks without increasing inference FLOPs, and some individual robust experts even outperform the full gated MoE model. The work thus documents a self-organizing emergence of robustness in sparse MoEs under adversarial training, a mechanism that supports lightweight, efficient, and robust model design.
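Assuming the Switch Loss above refers to the standard Switch Transformer load-balancing auxiliary loss (an interpretation, not stated explicitly in the summary), for $N$ experts it combines the fraction of inputs $f_i$ routed to expert $i$ with the mean router probability $P_i$ assigned to expert $i$:

$$\mathcal{L}_{\text{aux}} = \alpha \, N \sum_{i=1}^{N} f_i \, P_i$$

The term is minimized by uniform routing; when routing nonetheless collapses under adversarial training, the few overused experts receive most of the adversarial examples, which is the mechanism behind the robust experts described above.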

📝 Abstract
Robustifying convolutional neural networks (CNNs) against adversarial attacks remains challenging and often requires resource-intensive countermeasures. We explore the use of sparse mixture-of-experts (MoE) layers to improve robustness by replacing selected residual blocks or convolutional layers, thereby increasing model capacity without additional inference cost. On ResNet architectures trained on CIFAR-100, we find that inserting a single MoE layer in the deeper stages leads to consistent improvements in robustness under PGD and AutoPGD attacks when combined with adversarial training. Furthermore, we discover that when switch loss is used for balancing, it causes routing to collapse onto a small set of overused experts, thereby concentrating adversarial training on these paths and inadvertently making them more robust. As a result, some individual experts outperform the gated MoE model in robustness, suggesting that robust subpaths emerge through specialization. Our code is available at https://github.com/KASTEL-MobilityLab/robust-sparse-moes.
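A minimal PyTorch sketch of how such a layer might look, assuming top-1 (switch-style) routing over convolutional experts; the class and parameter names (SparseMoEBlock, num_experts) are illustrative and not taken from the authors' repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoEBlock(nn.Module):
    """Top-1 routed mixture of convolutional experts. Only the selected expert
    runs per input, so inference FLOPs stay close to a single residual block."""

    def __init__(self, channels: int, num_experts: int = 4):
        super().__init__()
        self.num_experts = num_experts
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )
            for _ in range(num_experts)
        )
        # Router gates on globally pooled features of the incoming activation.
        self.router = nn.Linear(channels, num_experts)

    def forward(self, x):
        gate_logits = self.router(x.mean(dim=(2, 3)))   # (B, E)
        gate_probs = F.softmax(gate_logits, dim=-1)      # (B, E)
        expert_idx = gate_probs.argmax(dim=-1)           # (B,) hard top-1 choice

        out = torch.zeros_like(x)
        for e in range(self.num_experts):
            mask = expert_idx == e
            if mask.any():
                # Scaling by the gate probability keeps the router trainable.
                w = gate_probs[mask, e].view(-1, 1, 1, 1)
                out[mask] = w * self.experts[e](x[mask])

        # Switch-style load-balancing term: fraction of inputs per expert
        # times mean router probability per expert, scaled by num_experts.
        frac = F.one_hot(expert_idx, self.num_experts).float().mean(dim=0)
        prob = gate_probs.mean(dim=0)
        aux_loss = self.num_experts * torch.sum(frac * prob)

        return F.relu(x + out), aux_loss
```

A ResNet could then swap one deep-stage residual block for SparseMoEBlock(channels), and the aux_loss returned alongside the features would be added to the classification loss during training.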
Problem

Research questions and friction points this paper is trying to address.

Improving CNN robustness against adversarial attacks
Using sparse MoE layers without increasing inference cost
Understanding how routing collapse and expert specialization affect robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Mixture-of-Experts layers replace residual blocks or convolutional layers in a ResNet
Adversarial training combined with the sparse MoE layer improves robustness (see the training sketch after this list)
Specialized robust subpaths emerge when switch-loss balancing leads to expert overuse
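A minimal sketch of PGD adversarial training for such a model, assuming the network returns (logits, aux_loss) as in the layer sketch above; the eps, alpha, and steps values are common CIFAR choices, not necessarily the authors' settings:

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-inf PGD: ascend the cross-entropy loss, then project back into the
    eps-ball around the clean input and the valid pixel range."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)              # model returns (logits, aux_loss)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv


def train_epoch(model, loader, optimizer, aux_weight=0.01, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)        # attack the current model state
        logits, aux_loss = model(x_adv)
        # Cross-entropy on adversarial examples plus the switch balancing term.
        loss = F.cross_entropy(logits, y) + aux_weight * aux_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```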
Svetlana Pavlitska
Karlsruhe Institute of Technology (KIT), Germany; FZI Research Center for Information Technology, Germany
Haixi Fan
FZI Research Center for Information Technology, Germany
Konstantin Ditschuneit
Karlsruhe Institute of Technology (KIT), Germany
J. Marius Zöllner
Professor at Karlsruhe Institute of Technology (KIT), Director at Forschungszentrum Informatik (FZI)
Intelligent Vehicles · Autonomous Driving · Robotics · Artificial Intelligence · Machine Learning