🤖 AI Summary
To address the trade-off between inference efficiency and accuracy in neural networks, this paper proposes a subspace-based neuron pruning method. First, an orthogonal subspace—equivalent to a non-normalized Gram–Schmidt process—is constructed via triangular transformation matrices, and neuron activations are projected onto it. Next, layer-wise pruning ratios are adaptively determined based on activation magnitudes (i.e., cumulative variance) within the subspace, and performance is recovered via linear least-squares reconstruction. Key contributions include: (i) the first use of triangular transformations for efficient orthogonalization; (ii) optimized orthogonalization ordering to enhance neuron importance estimation; and (iii) a fully automatic, activation-scale-driven pruning ratio allocation mechanism. On ImageNet, the method achieves state-of-the-art (SOTA) results for VGG-16 pruning and matches or exceeds more complex SOTA approaches for ResNet-50 across multiple pruning ratios.
📝 Abstract
Efficiency of neural network inference is undeniably important at a time when commercial use of AI models is increasing daily. Node pruning is the art of removing computational units such as neurons, filters, attention heads, or even entire layers to significantly reduce inference time while retaining network performance. In this work, we propose the projection of unit activations to an orthogonal subspace in which there is no redundant activity and within which we may prune nodes while simultaneously recovering the impact of lost units via linear least squares. We identify that, for effective node pruning, this subspace must be constructed using a triangular transformation matrix, a transformation which is equivalent to an unnormalized Gram-Schmidt orthogonalization. We furthermore show that the order in which units are orthogonalized can be optimised to maximally reduce node activations in our subspace and thereby form a more optimal ranking of nodes. Finally, we leverage these orthogonal subspaces to automatically determine layer-wise pruning ratios based upon the relative scale of node activations in our subspace, equivalent to cumulative variance. Our proposed method reaches state of the art when pruning ImageNet-trained VGG-16 and rivals more complex state-of-the-art methods when pruning ResNet-50 networks across a range of pruning ratios.
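The three ingredients described above — a triangular transform that orthogonalizes activations (unnormalized Gram-Schmidt), a cumulative-variance rule for choosing how many units to keep, and a linear least-squares refit of the next layer — can be sketched in NumPy. This is a minimal illustration under simplifying assumptions (a single linear layer, a fixed orthogonalization order, no ordering optimisation); all function names are illustrative and not taken from the paper.

```python
import numpy as np

def orthogonal_subspace(A):
    """Unnormalized Gram-Schmidt on the columns of the activation
    matrix A (n_samples, n_units), expressed as an upper-triangular
    transform T with A @ T = Z, where Z has orthogonal columns."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    T = np.eye(n)
    Z = A.copy()
    for j in range(1, n):
        for i in range(j):
            # Subtract the component of column j explained by column i.
            denom = Z[:, i] @ Z[:, i]
            c = (Z[:, i] @ Z[:, j]) / denom if denom > 1e-12 else 0.0
            Z[:, j] -= c * Z[:, i]
            T[:, j] -= c * T[:, i]  # T stays upper triangular
    return Z, T

def keep_count(Z, energy=0.95):
    """Number of units needed to retain the given fraction of total
    activation energy (cumulative variance) in the orthogonal subspace."""
    v = np.sum(Z ** 2, axis=0)              # per-unit energy
    cum = np.cumsum(np.sort(v)[::-1]) / v.sum()
    return int(np.searchsorted(cum, energy) + 1)

def recover_next_layer(A, keep, W):
    """Refit the next layer's weights on the kept units so that the
    layer output best matches the unpruned output, in the
    least-squares sense."""
    target = A @ W                           # original layer output
    W_new, *_ = np.linalg.lstsq(A[:, keep], target, rcond=None)
    return W_new
```

In this sketch the pruning ratio falls out of `keep_count` automatically per layer, and `recover_next_layer` absorbs the contribution of pruned units into the surviving weights, mirroring the recovery step the abstract describes.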