General Pruning Criteria for Fast SBL

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the hyperparameter analysis problem in sparse Bayesian learning (SBL) under relaxed Gaussian assumptions on the noise and the weights. We propose a generalized pruning criterion derived from a rigorous analysis of how individual hyperparameters influence the marginal likelihood function, establishing sufficient conditions for their estimates to converge either to finite values or to infinity. This reveals a universal pruning mechanism underlying fast SBL algorithms. In the Gaussian special case, our criterion reduces to the classical F-SBL condition, confirming theoretical consistency; crucially, it remains valid beyond Gaussian settings, substantially broadening SBL's modeling flexibility and applicability. Our approach builds upon marginal maximum-likelihood estimation and asymptotic hyperparameter analysis, thereby weakening restrictive distributional assumptions and enhancing model interpretability and robustness.
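For context, the classical F-SBL condition that the summary refers to has a well-known closed form in the Gaussian case. The sketch below follows the notation of Tipping and Faul's fast marginal likelihood maximization (2003), not this paper's; here $\alpha_i$ is the precision hyperparameter of weight $i$, $\boldsymbol{\phi}_i$ its dictionary column, and $\mathbf{C}_{-i}$ the marginal covariance with component $i$ removed.

```latex
% Marginal log-likelihood as a function of a single precision
% hyperparameter \alpha_i, all others held fixed (Gaussian case):
\[
  \ell(\alpha_i)
    = \tfrac{1}{2}\Bigl[\ln\alpha_i - \ln(\alpha_i + s_i)
      + \frac{q_i^2}{\alpha_i + s_i}\Bigr],
  \qquad
  s_i = \boldsymbol{\phi}_i^{\mathsf T} \mathbf{C}_{-i}^{-1} \boldsymbol{\phi}_i,
  \quad
  q_i = \boldsymbol{\phi}_i^{\mathsf T} \mathbf{C}_{-i}^{-1} \mathbf{y}.
\]
% Maximizing over \alpha_i yields the F-SBL pruning rule:
\[
  \hat{\alpha}_i =
  \begin{cases}
    \dfrac{s_i^2}{q_i^2 - s_i}, & q_i^2 > s_i \quad \text{(weight kept)},\\[4pt]
    \infty, & q_i^2 \le s_i \quad \text{(weight pruned)}.
  \end{cases}
\]
```

This makes concrete what "estimates converge to infinity" means: whenever the quality factor $q_i^2$ does not exceed the sparsity factor $s_i$, the per-hyperparameter likelihood is maximized at $\alpha_i = \infty$ and the weight is removed from the model.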

📝 Abstract
Sparse Bayesian learning (SBL) associates with each weight in the underlying linear model a hyperparameter by assuming that each weight is Gaussian distributed with zero mean and precision (inverse variance) equal to its associated hyperparameter. The method estimates the hyperparameters by marginalizing out the weights and performing (marginalized) maximum likelihood (ML) estimation. Typically, many of the resulting hyperparameter estimates diverge to infinity, effectively setting the estimates of the corresponding weights to zero (i.e., pruning those weights from the model) and thereby yielding a sparse estimate of the weight vector. In this letter, we analyze the marginal likelihood as a function of a single hyperparameter, keeping the others fixed, when the Gaussian assumptions on the noise samples and the weight distribution that underlie the derivation of SBL are weakened. We derive sufficient conditions that lead, on the one hand, to finite hyperparameter estimates and, on the other, to infinite ones. Finally, we show that in the Gaussian case the two conditions are complementary and coincide with the pruning condition of fast SBL (F-SBL), thereby providing additional insight into this algorithm.
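As a concrete illustration of the pruning mechanism the abstract describes, here is a minimal sketch of the single-hyperparameter update in the Gaussian case, implementing the classical F-SBL rule (Tipping & Faul, 2003). This is our own illustration, not the paper's code, and the function and variable names are hypothetical.

```python
import numpy as np

def fsbl_alpha_update(phi_i, C_minus_i_inv, y):
    """Single-hyperparameter F-SBL update in the Gaussian case
    (Tipping & Faul 2003 notation; illustrative, not this paper's code).

    phi_i         : (N,) dictionary column associated with weight i
    C_minus_i_inv : (N, N) inverse of the marginal covariance with
                    component i removed from the model
    y             : (N,) observation vector

    Returns the precision alpha_i that maximizes the marginal
    likelihood in that coordinate; np.inf signals that the weight
    is pruned (its estimate is set to zero).
    """
    s_i = phi_i @ C_minus_i_inv @ phi_i  # "sparsity" factor
    q_i = phi_i @ C_minus_i_inv @ y      # "quality" factor
    if q_i**2 > s_i:                     # finite maximizer exists -> keep weight
        return s_i**2 / (q_i**2 - s_i)
    return np.inf                        # maximum at infinity -> prune weight
```

The letter's contribution is to characterize when this keep/prune dichotomy persists once the Gaussian assumptions on the noise and the weights are relaxed.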
Problem

Research questions and friction points this paper is trying to address.

Analyzing marginal likelihood under weakened Gaussian assumptions
Deriving conditions for finite and infinite hyperparameter estimates
Providing insights into the pruning condition of the fast SBL (F-SBL) algorithm
Innovation

Methods, ideas, or system contributions that make the work stand out.

General pruning criteria for sparse Bayesian learning
Analyzing marginal likelihood under weakened Gaussian assumptions
Deriving sufficient conditions for finite and infinite hyperparameter estimates
Jakob Möderl
Graz University of Technology, Graz, Austria
Erik Leitinger
Assistant Professor, Graz University of Technology
Detection and Estimation, Statistical Signal Processing, Graphical Models, Data Fusion, Localization
Bernard Henri Fleury
Technische Universität Wien, Vienna, Austria