🤖 AI Summary
This work addresses the lack of theoretical foundations for the adaptive iterative soft-thresholding algorithm (ISTA) for LASSO problems. We propose and analyze an adaptive ISTA that estimates the noise level via the median absolute deviation (MAD) and obviates explicit tuning of the regularization parameter λ. Methodologically, it achieves noise-adaptive thresholding by dynamically adjusting the shrinkage intensity based on the MAD. Theoretical contributions include: (i) characterizing the fixed points as scale-equivariant, non-unique, and locally stable; (ii) establishing the first local linear convergence guarantee; and (iii) developing a unified framework for global convergence analysis. By eliminating manual parameter selection, the algorithm attains both stability and computational efficiency while preserving theoretical rigor. Extensive experiments validate the convergence analysis and demonstrate practical efficacy across diverse problem instances.
📝 Abstract
The adaptive Iterative Soft-Thresholding Algorithm (ISTA) is a popular algorithm for finding a desirable solution to the LASSO problem without explicitly tuning the regularization parameter $λ$. Although the adaptive ISTA is a successful practical algorithm, few theoretical results exist. In this paper, we present a theoretical analysis of the adaptive ISTA with a thresholding strategy that estimates the noise level by the median absolute deviation (MAD). We show properties of the fixed points of the algorithm, including scale equivariance, non-uniqueness, and local stability; prove a local linear convergence guarantee; and characterize its global convergence behavior.
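To make the setting concrete, here is a minimal sketch of an adaptive ISTA iteration in the spirit described above. The paper's exact update rule is not reproduced here; in particular, estimating the noise level from the MAD of the current gradient (with the Gaussian consistency factor 0.6745) and taking the threshold as that estimate divided by the step-size constant are illustrative assumptions, not the authors' precise scheme.

```python
import numpy as np

def adaptive_ista_mad(A, y, n_iter=500, tol=1e-8):
    """Sketch of ISTA for the LASSO where the soft threshold is
    re-estimated each iteration from a MAD-based noise estimate
    (the specific threshold rule here is an assumption)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the least-squares term
        z = x - grad / L                   # gradient step
        # Noise-level estimate via the median absolute deviation,
        # scaled by 0.6745 for consistency under Gaussian noise.
        sigma = np.median(np.abs(grad - np.median(grad))) / 0.6745
        tau = sigma / L                    # hypothetical adaptive threshold
        x_new = np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)  # soft-thresholding
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x = x_new
    return x
```

Because the threshold is recomputed from the data at every iteration, no λ is supplied by the user; the shrinkage intensity adapts as the residual (and hence the gradient's spread) shrinks, which is the behavior the fixed-point and convergence analysis studies.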