🤖 AI Summary
This work addresses the challenge of maintaining robustness and accuracy in neural networks under extreme sparsity (99%). The authors propose an adaptive topology training method that dynamically reconfigures a three-layer sparse connectivity structure at each training epoch, complemented by a single dense layer for image classification tasks. Experiments on MNIST and Fashion-MNIST demonstrate that the approach achieves competitive classification accuracy while substantially reducing parameter count. Moreover, the model exhibits notable robustness against various perturbations—including random connection removal, adversarial attacks, and weight permutation—highlighting the effectiveness of dynamic sparse topologies in simultaneously enhancing model efficiency and resilience.
📝 Abstract
We investigate the robustness of sparse artificial neural networks trained with adaptive topology. We focus on a simple yet effective architecture consisting of three sparse layers with 99% sparsity followed by a dense layer, applied to image classification tasks such as MNIST and Fashion-MNIST. By updating the topology of the sparse layers between epochs, we achieve competitive accuracy despite the significantly reduced number of weights. Our primary contribution is a detailed analysis of the robustness of these networks, exploring their performance under various perturbations, including random link removal, adversarial attacks, and link weight shuffling. Through extensive experiments, we demonstrate that adaptive topology not only enhances efficiency but also maintains robustness. This work highlights the potential of adaptive sparse networks as a promising direction for developing efficient and reliable deep learning models.
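The abstract's core mechanism, reconfiguring a sparse layer's topology between epochs, can be illustrated with a SET-style prune-and-regrow step (Sparse Evolutionary Training). The paper's exact update rule is not given here, so the function name, the `regrow_frac` hyperparameter, and the magnitude-based pruning criterion below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def update_topology(weights, mask, regrow_frac=0.3, rng=None):
    """One SET-style epoch-end update (illustrative, not the paper's
    exact rule): prune the smallest-magnitude active links, then regrow
    the same number of links at random inactive positions, so the
    overall sparsity level (e.g. 99%) stays fixed."""
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_prune = int(regrow_frac * active.size)
    # Prune: drop the active links with the smallest |weight|.
    mags = np.abs(weights.flat[active])
    pruned = active[np.argsort(mags)[:n_prune]]
    mask.flat[pruned] = False
    weights.flat[pruned] = 0.0
    # Regrow: activate the same number of random empty positions,
    # initialized with small random weights.
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n_prune, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(0.0, 0.01, size=n_prune)
    return weights, mask

# Usage sketch: a 99%-sparse 784x256 layer, updated after an epoch.
rng = np.random.default_rng(42)
w = np.zeros((784, 256))
mask = np.zeros(w.shape, dtype=bool)
idx = rng.choice(w.size, size=int(0.01 * w.size), replace=False)
mask.flat[idx] = True
w.flat[idx] = rng.normal(0.0, 0.01, size=idx.size)
w, mask = update_topology(w, mask, rng=rng)
```

Note that the number of active links is conserved by construction, so the parameter budget is identical before and after each reconfiguration; only the placement of the links changes.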