Explicit Group Sparse Projection with Applications to Deep Learning and NMF

📅 2019-12-09
🏛️ Trans. Mach. Learn. Res.
📈 Citations: 10
Influential: 0
🤖 AI Summary
This work addresses the challenge of explicitly controlling the average sparsity—measured by the Hoyer metric—in sparse projections of vector sets. We propose the first group-level explicit sparsity projection method: it directly specifies a target average sparsity level and jointly optimizes the sparsity patterns of all vectors, eliminating per-vector processing or reliance on implicit regularization parameters. Our approach generalizes the weighted ℓ₁ norm, enabling flexible sparsity modeling with linear-time computational complexity. The key innovation is the first formulation that imposes interpretable, tunable average sparsity constraints over an entire vector group. Experiments demonstrate substantial improvements in the accuracy–sparsity trade-off for ResNet50 pruning, and competitive reconstruction error and classification performance on CIFAR-10, ImageNet, and non-negative matrix factorization tasks.
📝 Abstract
We design a new sparse projection method for a set of vectors that guarantees a desired average sparsity level measured using the popular Hoyer measure (an affine function of the ratio of the $\ell_1$ and $\ell_2$ norms). Existing approaches either project each vector individually or require a regularization parameter that implicitly maps to the average $\ell_0$ measure of sparsity. Instead, our approach sets the sparsity level for the whole set explicitly and simultaneously projects a group of vectors, with the sparsity level of each vector tuned automatically. We show that the computational complexity of our projection operator is linear in the size of the problem. Additionally, we propose a generalization of this projection that replaces the $\ell_1$ norm with its weighted version. We showcase the efficacy of our approach in both supervised and unsupervised learning tasks on image datasets including CIFAR-10 and ImageNet. In deep neural network pruning, the sparse models produced by our method on ResNet50 achieve significantly higher accuracies at corresponding sparsity levels than existing competitors. In nonnegative matrix factorization, our approach yields competitive reconstruction errors against state-of-the-art algorithms.
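For reference, the Hoyer measure named in the abstract is the affine rescaling of $\|x\|_1 / \|x\|_2$ that maps a vector with equal-magnitude entries to 0 and a vector with a single nonzero entry to 1. A minimal sketch of the measure and the group average the paper constrains (function names are illustrative, not taken from the paper's code):

```python
import math

def hoyer_sparsity(x):
    """Hoyer sparsity: (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1).

    Returns 0.0 for a vector with equal-magnitude nonzero entries and
    1.0 for a vector with exactly one nonzero entry.
    """
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

def average_hoyer(vectors):
    # The paper's projection constrains the *average* of this measure
    # over the whole group, rather than each vector individually.
    return sum(hoyer_sparsity(x) for x in vectors) / len(vectors)
```

For example, `hoyer_sparsity([1, 0, 0, 0])` is 1.0 and `hoyer_sparsity([1, 1, 1, 1])` is 0.0, so the group average over those two vectors is 0.5.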
Problem

Research questions and friction points this paper is trying to address.

Designs a sparse projection method for vector groups with explicit average sparsity control
Proposes a computationally efficient projection operator linear in problem size
Applies the method to deep learning pruning and nonnegative matrix factorization tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group sparse projection with explicit average sparsity control
Linear computational complexity in problem size
Weighted ℓ1 norm generalization for enhanced flexibility
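The weighted ℓ1 generalization listed above replaces $\|x\|_1 = \sum_i |x_i|$ with $\sum_i w_i |x_i|$ for nonnegative weights $w_i$, recovering the ordinary ℓ1 norm when all weights equal 1. A minimal sketch (the function name is illustrative):

```python
def weighted_l1(x, w):
    """Weighted l1 norm: sum_i w_i * |x_i|, with nonnegative weights w_i.

    With w_i = 1 for all i this reduces to the ordinary l1 norm.
    """
    assert len(x) == len(w)
    return sum(wi * abs(xi) for xi, wi in zip(x, w))
```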
Riyasat Ohib
TReNDS Center, Georgia Institute of Technology
Nicolas Gillis
University of Mons
optimization, data science, numerical linear algebra, signal processing
Niccolò Dalmasso
J.P. Morgan AI Research
Machine Learning
Sameena Shah
J.P. Morgan AI Research
Vamsi K. Potluru
J.P. Morgan AI Research
S. Plis
TReNDS Center