Efficient Model-Based Deep Learning via Network Pruning and Fine-Tuning

📅 2023-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high test-time computational cost of model-based deep learning (MBDL) in imaging inverse problems, which stems from its iterative architecture, this work integrates structured pruning into MBDL networks. The authors propose three fine-tuning strategies, each suited to a different training setting depending on the availability of a pre-trained model and high-quality ground truth, to jointly achieve model compression and accuracy preservation. The approach is compatible with both deep equilibrium models (DEQ) and deep unfolding (DU), two dominant MBDL frameworks, delivering 50% and 32% inference speedups, respectively, with PSNR degradation under 0.1 dB. This establishes an efficient and robust compression approach for the practical deployment of MBDL methods.
📝 Abstract
Model-based deep learning (MBDL) is a powerful methodology for designing deep models to solve imaging inverse problems. MBDL networks can be seen as iterative algorithms that estimate the desired image using a physical measurement model and a learned image prior specified using a convolutional neural network (CNN). The iterative nature of MBDL networks increases the test-time computational complexity, which limits their applicability in certain large-scale applications. Here we make two contributions to address this issue: First, we show how structured pruning can be adopted to reduce the number of parameters in MBDL networks. Second, we present three methods to fine-tune the pruned MBDL networks to mitigate potential performance loss. Each fine-tuning strategy has a unique benefit that depends on the presence of a pre-trained model and a high-quality ground truth. We show that our pruning and fine-tuning approach can accelerate image reconstruction using popular deep equilibrium learning (DEQ) and deep unfolding (DU) methods by 50% and 32%, respectively, with nearly no performance loss. This work thus offers a step forward for solving inverse problems by showing the potential of pruning to improve the scalability of MBDL. Code is available at https://github.com/wustl-cig/MBDL_Pruning.
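The abstract describes MBDL networks as iterative algorithms that alternate between a physical measurement model and a learned CNN prior. The sketch below illustrates that structure for the deep unfolding (DU) case: a fixed number of iterations, each taking a data-fidelity gradient step on ||Ax - y||^2 followed by a denoising step. The measurement setup, step size, and the moving-average "denoiser" standing in for a trained CNN are all illustrative assumptions, not the paper's actual models.

```python
import numpy as np

def denoise(x):
    """Stand-in for a learned CNN prior: a simple 3-tap moving average.
    In real MBDL this would be a trained denoising network."""
    padded = np.pad(x, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def unrolled_reconstruction(y, A, num_iters=100, step=0.3):
    """A fixed number of unrolled iterations (the DU architecture):
    gradient step on the data-fidelity term, then the prior step."""
    x = A.T @ y  # initialize from the adjoint of the measurement model
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)      # gradient of 0.5 * ||A x - y||^2
        x = denoise(x - step * grad)  # enforce the learned image prior
    return x

rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, np.pi, 32))       # smooth ground-truth signal
A = rng.standard_normal((48, 32)) / np.sqrt(48)  # toy measurement operator
y = A @ x_true                                   # noiseless measurements
x_hat = unrolled_reconstruction(y, A)
```

Because every test-time reconstruction runs all `num_iters` passes through the CNN prior, the per-iteration cost of that network multiplies, which is exactly the scalability bottleneck the paper targets with pruning.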
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity in model-based deep learning networks
Mitigating performance loss after network pruning via fine-tuning
Accelerating image reconstruction in inverse problems using pruning techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured pruning reduces MBDL network parameters
Fine-tuning mitigates pruned network performance loss
Accelerates image reconstruction with minimal performance loss
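To make the first innovation concrete, here is a minimal sketch of structured pruning at the channel level: whole convolutional output channels are ranked by L1 magnitude and the weakest are removed as a unit, which shrinks the layer's actual compute rather than just zeroing scattered weights. The 50% ratio, the L1 criterion, and the function names are illustrative assumptions; the paper's exact pruning procedure may differ.

```python
import numpy as np

def prune_channels(weight, ratio=0.5):
    """Structured pruning: drop the output channels with the smallest
    L1 norms. `weight` has shape (out_ch, in_ch, kH, kW). Returns the
    pruned weight and the indices of the channels that were kept."""
    out_ch = weight.shape[0]
    scores = np.abs(weight).reshape(out_ch, -1).sum(axis=1)   # L1 per channel
    keep = np.sort(np.argsort(scores)[int(out_ch * ratio):])  # top channels
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))     # toy conv layer: 16 output channels
pruned, kept = prune_channels(w, ratio=0.5)
print(pruned.shape)  # → (8, 8, 3, 3): half the channels removed as whole units
```

Since entire channels disappear, downstream layers consume smaller tensors, which is why structured pruning translates into the inference speedups the summary reports; the fine-tuning strategies then recover the accuracy lost by the removed channels.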