š¤ AI Summary
Modern neural networks suffer from computational redundancy due to uniform neuron activation across all inputs. To address this, we propose MID-Lāa lightweight, input-adaptive, model-agnostic dynamic sparsification module. MID-L employs differentiable Top-k selection and input-conditioned gating to generate learnable masks, enabling per-sample dynamic activation of critical neurons via matrix interpolation between dual pathways. It is the first method to jointly achieve dynamic sparsity, end-to-end differentiability, and architectural generality. Leveraging mutual information-driven sparse optimization and FLOPs-aware design, MID-L reduces average neuron activation by 55% and inference computation by 1.7Ć across six benchmarks, while maintaining or improving accuracy. Moreover, it significantly enhances generalization performance and robustness to input noise.
š Abstract
Modern neural networks often activate all neurons for every input, leading to unnecessary computation and inefficiency. We introduce Matrix-Interpolated Dropout Layer (MID-L), a novel module that dynamically selects and activates only the most informative neurons by interpolating between two transformation paths via a learned, input-dependent gating vector. Unlike conventional dropout or static sparsity methods, MID-L employs a differentiable Top-k masking strategy, enabling per-input adaptive computation while maintaining end-to-end differentiability. MID-L is model-agnostic and integrates seamlessly into existing architectures. Extensive experiments on six benchmarks, including MNIST, CIFAR-10, CIFAR-100, SVHN, UCI Adult, and IMDB, show that MID-L achieves up to average 55% reduction in active neurons, 1.7$ imes$ FLOPs savings, and maintains or exceeds baseline accuracy. We further validate the informativeness and selectivity of the learned neurons via Sliced Mutual Information (SMI) and observe improved robustness under overfitting and noisy data conditions. Additionally, MID-L demonstrates favorable inference latency and memory usage profiles, making it suitable for both research exploration and deployment on compute-constrained systems. These results position MID-L as a general-purpose, plug-and-play dynamic computation layer, bridging the gap between dropout regularization and efficient inference.