Deep greedy unfolding: Sorting out argsorting in greedy sparse recovery algorithms

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Greedy sparse recovery algorithms (e.g., OMP, IHT) are non-differentiable due to reliance on the discrete, non-smooth argsort operation, hindering their integration into deep neural networks. Method: This paper proposes a differentiable permutation approximation framework based on SoftSort, yielding fully differentiable variants—Soft-OMP and Soft-IHT—where hard thresholding and index selection are replaced by continuous soft permutations. We further design end-to-end trainable architectures, OMP-Net and IHT-Net, incorporating structure-aware learnable weights to capture implicit data sparsity patterns. Contribution/Results: We provide theoretical guarantees on controllable approximation accuracy relative to the original greedy algorithms. Experiments demonstrate state-of-the-art performance in compressive sensing and other sparse reconstruction tasks, achieving fully differentiable, end-to-end trainable sparse recovery. To our knowledge, this is the first work to unify soft sorting with algorithm unrolling, establishing a principled differentiable integration pathway for greedy sparse algorithms within deep learning frameworks.

📝 Abstract
Gradient-based learning requires (deep) neural networks to be differentiable at all steps. This includes model-based architectures constructed by unrolling iterations of an iterative algorithm onto layers of a neural network, known as algorithm unrolling. However, greedy sparse recovery algorithms depend on the non-differentiable argsort operator, which hinders their integration into neural networks. In this paper, we address this challenge in Orthogonal Matching Pursuit (OMP) and Iterative Hard Thresholding (IHT), two popular representative algorithms in this class. We propose permutation-based variants of these algorithms and approximate permutation matrices using "soft" permutation matrices derived from SoftSort, a continuous relaxation of argsort. We demonstrate -- both theoretically and numerically -- that Soft-OMP and Soft-IHT, as differentiable counterparts of OMP and IHT and fully compatible with neural network training, effectively approximate these algorithms with a controllable degree of accuracy. This leads to the development of OMP- and IHT-Net, fully trainable network architectures based on Soft-OMP and Soft-IHT, respectively. Finally, by choosing weights as "structure-aware" trainable parameters, we connect our approach to structured sparse recovery and demonstrate its ability to extract latent sparsity patterns from data.
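As a rough illustration of the continuous relaxation the abstract describes, here is a minimal NumPy sketch of SoftSort in the spirit of Prillo and Eisenschlos's formulation; the function name and the temperature parameter `tau` are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def softsort(s, tau=1.0):
    """Continuous relaxation of argsort (SoftSort-style sketch).

    Returns a row-stochastic n x n "soft" permutation matrix P whose
    i-th row is a softmax peaked at the index of the i-th largest
    entry of s; as tau -> 0, P approaches the hard sorting permutation.
    """
    s = np.asarray(s, dtype=float)
    sorted_s = np.sort(s)[::-1]                      # descending sort
    logits = -np.abs(sorted_s[:, None] - s[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

# For small tau, P @ s approximately recovers the descending sorted vector:
s = np.array([0.1, 2.0, -1.0])
P = softsort(s, tau=0.01)
print(np.round(P @ s, 3))   # ≈ [2.0, 0.1, -1.0]
```

Because every entry of `P` is produced by a softmax, gradients flow through the sorting step, which is what makes the unrolled architectures in the paper trainable end to end.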
Problem

Research questions and friction points this paper is trying to address.

Differentiable argsort for greedy sparse recovery algorithms
Soft permutation matrices to replace non-differentiable argsort
Trainable neural networks for structured sparse recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Soft permutation matrices replace argsort
Differentiable Soft-OMP and Soft-IHT variants
Trainable OMP- and IHT-Net architectures
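To make the "soft permutation matrices replace argsort" idea concrete, here is a hedged sketch of how the hard-thresholding step of IHT could be relaxed through a soft permutation. The decomposition of hard thresholding as "sort, mask, unsort" and all helper names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softsort(s, tau=1.0):
    # SoftSort-style relaxation: row i of P is a softmax peaked at the
    # index of the i-th largest entry of s (hard permutation as tau -> 0).
    s = np.asarray(s, dtype=float)
    sorted_s = np.sort(s)[::-1]
    logits = -np.abs(sorted_s[:, None] - s[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def soft_hard_threshold(x, k, tau=0.01):
    """Differentiable surrogate for the hard-thresholding operator H_k.

    H_k(x) keeps the k largest-magnitude entries of x. It can be written
    as P.T @ (d * (P @ x)), where P is the permutation sorting |x| in
    descending order and d = (1,...,1,0,...,0) with k ones. Replacing the
    hard P by its soft relaxation makes the whole operator differentiable.
    """
    P = softsort(np.abs(x), tau)
    d = np.zeros(len(x))
    d[:k] = 1.0
    return P.T @ (d * (P @ x))

x = np.array([3.0, -0.1, 2.0, 0.05])
print(np.round(soft_hard_threshold(x, k=2), 3))  # ≈ [3, 0, 2, 0]
```

A Soft-IHT-style iteration would then substitute this surrogate for H_k in the standard recursion x ← H_k(x + Aᵀ(y − Ax)), so the loop can be unrolled into layers with trainable parameters, as in the IHT-Net architecture the paper proposes.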
Sina Mohammad-Taheri
Department of Mathematics and Statistics, Concordia University, Montréal, QC, Canada
Matthew J. Colbrook
Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, Cambridgeshire, UK
Simone Brugiapaglia
Associate Professor, Concordia University, Department of Mathematics and Statistics
Numerical Analysis · Mathematics of Data Science · Machine Learning · Computational Mathematics