Defending against adversarial attacks using mixture of experts

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of machine learning models against diverse threats, including adversarial examples, data poisoning, and model extraction, this paper proposes an end-to-end adversarial training framework based on a Mixture of Experts (MoE). The framework integrates nine ResNet-18 experts with a learnable gating mechanism and embeds adversarial training directly into the MoE architecture, enabling joint optimization of expert parameters and routing policies. Compared to complex monolithic models, the approach achieves superior robustness with a significantly lighter backbone, substantially outperforming existing defenses and stronger baseline models under standard adversarial attacks. This demonstrates the dual advantage of lightweight MoE architectures: enhanced robustness without compromising computational efficiency.

📝 Abstract
Machine learning is a powerful tool that enables full automation of many tasks without explicit programming. Despite recent progress of machine learning across domains, these models have shown vulnerabilities when exposed to adversarial threats. Adversarial threats aim to prevent machine learning models from satisfying their objectives. They can create adversarial perturbations, which are imperceptible to the human eye but can cause misclassification during inference. Moreover, they can poison the training data to harm the model's performance, or they can query the model to steal its sensitive information. In this paper, we propose a defense system that devises an adversarial training module within a mixture-of-experts architecture to enhance its robustness against adversarial threats. Our proposed defense system uses nine pre-trained experts with ResNet-18 as their backbone. During end-to-end training, the parameters of the expert models and the gating mechanism are jointly updated, allowing further optimization of the experts. Our proposed defense system outperforms state-of-the-art defense systems and plain classifiers that use a more complex architecture than our model's backbone.
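The core mechanism in the abstract, a learnable gate mixing the outputs of nine experts, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: it assumes a shared feature vector and linear expert heads (the paper's experts are full ResNet-18 networks), and `FEAT_DIM` and `NUM_CLASSES` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 9   # the paper uses nine ResNet-18 experts
NUM_CLASSES = 10  # assumed; the dataset is not specified here
FEAT_DIM = 512    # ResNet-18 penultimate feature size

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_forward(features, expert_heads, gate_w):
    """Combine expert logits with weights from a learnable gate.

    features:     (batch, FEAT_DIM) input features
    expert_heads: list of (FEAT_DIM, NUM_CLASSES) matrices standing in
                  for the nine expert networks
    gate_w:       (FEAT_DIM, NUM_EXPERTS) gating weights
    """
    gate = softmax(features @ gate_w)  # (batch, NUM_EXPERTS), rows sum to 1
    expert_logits = np.stack([features @ w for w in expert_heads], axis=1)
    # (batch, NUM_EXPERTS, NUM_CLASSES) mixed per example by the gate
    return np.einsum("be,bec->bc", gate, expert_logits)

features = rng.standard_normal((4, FEAT_DIM))
heads = [rng.standard_normal((FEAT_DIM, NUM_CLASSES)) * 0.01
         for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((FEAT_DIM, NUM_EXPERTS)) * 0.01
logits = moe_forward(features, heads, gate_w)
print(logits.shape)  # (4, 10)
```

Because the gate and the expert heads are both differentiable, a single loss on the mixed logits updates expert parameters and routing weights jointly, which is the end-to-end training the paper describes.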
Problem

Research questions and friction points this paper is trying to address.

Defending machine learning models against adversarial attacks
Enhancing robustness using mixture-of-experts architecture
Improving performance over complex state-of-the-art systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-experts architecture for adversarial defense
Joint training of experts and gating mechanism
Pre-trained ResNet-18 experts enhance robustness
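The adversarial training module above relies on generating perturbed inputs during training. This summary does not say which attack the paper uses; as an illustration only, here is a minimal NumPy sketch of the standard FGSM perturbation applied to a toy logistic model (the epsilon budget and the model are assumptions, not the paper's setup).

```python
import numpy as np

def fgsm_perturb(x, grad_x, epsilon=8 / 255):
    """Fast Gradient Sign Method: one-step L-infinity perturbation.

    x:       clean input in [0, 1]
    grad_x:  gradient of the loss with respect to x
    epsilon: attack budget (8/255 is a common image setting)
    """
    x_adv = x + epsilon * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)  # keep values in the valid range

# Toy example: binary logistic model with cross-entropy loss
rng = np.random.default_rng(1)
w = rng.standard_normal(16)
x = rng.uniform(0.0, 1.0, size=16)
y = 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x)))  # model confidence for class 1
grad_x = (p - y) * w                # d(loss)/dx for logistic regression
x_adv = fgsm_perturb(x, grad_x)
```

In adversarial training, such perturbed inputs are fed back as training examples, so both the experts and the gating mechanism learn to classify them correctly.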