🤖 AI Summary
This study addresses the performance degradation in sparse inverse problems caused by mismatches between prior assumptions and the true sparsity structure of data. To this end, the authors establish a unified experimental framework to systematically compare L1 regularization with the Variational Garrote (VG) method across tasks including signal resampling, denoising, and sparse-view computed tomography. By incorporating a variational binary gating mechanism that approximates the L0 norm, VG substantially enhances support recovery and generalization under highly underdetermined conditions. Empirical results demonstrate that VG consistently achieves lower minimum generalization error and greater stability than L1 regularization, with its advantages becoming especially pronounced in highly sparse regimes—highlighting the efficacy of spike-and-slab–type priors in such settings.
📝 Abstract
Sparse regularization plays a central role in solving inverse problems arising from incomplete or corrupted measurements. Different regularizers correspond to different prior assumptions about the structure of the unknown signal, and reconstruction performance depends on how well these priors match the intrinsic sparsity of the data. This work investigates the effect of sparsity priors in inverse problems by comparing conventional L1 regularization with the Variational Garrote (VG), a probabilistic method that approximates L0 sparsity through variational binary gating variables. A unified experimental framework is constructed across multiple reconstruction tasks, including signal resampling, signal denoising, and sparse-view computed tomography. To enable consistent comparison across models with different parameterizations, regularization strength is swept over wide ranges and reconstruction behavior is analyzed through train-generalization error curves. Experiments reveal characteristic bias-variance tradeoff patterns across tasks and demonstrate that VG frequently achieves lower minimum generalization error and improved stability in strongly underdetermined regimes where accurate support recovery is critical. These results suggest that sparsity priors closer to spike-and-slab structure can provide advantages when the underlying coefficient distribution is strongly sparse. The study highlights the importance of prior-data alignment in sparse inverse problems and provides empirical insights into the behavior of variational L0-type methods across different information bottlenecks.
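To make the comparison the abstract describes concrete, the following is a minimal, self-contained sketch of an underdetermined sparse recovery experiment: an L1 regularization-strength sweep versus an L0-style hard-gating recovery. It uses ISTA (soft thresholding) for L1 and iterative hard thresholding (IHT) as a simple stand-in for binary-gate / spike-and-slab behavior; the actual Variational Garrote, its variational updates, and the paper's tasks and data are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined problem: y = A x + noise, with k-sparse ground truth x.
n_meas, n_feat, k = 40, 100, 5
A = rng.standard_normal((n_meas, n_feat)) / np.sqrt(n_meas)
x_true = np.zeros(n_feat)
support = rng.choice(n_feat, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

def ista(A, y, lam, n_iter=500):
    """L1 recovery via iterative soft thresholding (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the least-squares term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def iht(A, y, k, n_iter=500):
    """L0-style recovery via iterative hard thresholding (keep top-k entries)."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.zeros_like(g)
        idx = np.argsort(np.abs(g))[-k:]   # binary "gates": only top-k survive
        x[idx] = g[idx]
    return x

# Sweep the L1 regularization strength; use error against the true
# coefficients as a proxy for generalization error.
lams = np.logspace(-4, 0, 20)
l1_errs = [np.linalg.norm(ista(A, y, lam) - x_true) for lam in lams]
l0_err = np.linalg.norm(iht(A, y, k) - x_true)
print(f"best L1 error over sweep: {min(l1_errs):.4f}")
print(f"L0-style (hard gating) error: {l0_err:.4f}")
```

Even in this toy setting, the sweep exposes the bias-variance pattern the abstract refers to: small regularization underconstrains the 40x100 system, large regularization biases the surviving coefficients, and the hard-gating estimator (given the correct sparsity level) avoids the shrinkage bias on its recovered support.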