Efficient Network Automatic Relevance Determination

📅 2025-06-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of jointly modeling input feature sparsity and output structural correlations in high-dimensional multi-output regression, this paper proposes the NARD framework and two efficient variants, Sequential NARD and Surrogate Function NARD. Built upon a matrix-normal prior and automatic relevance determination (ARD), the method introduces a marginal likelihood surrogate objective and a sequential feature update mechanism, enabling simultaneous automatic feature selection and structured covariance learning within a unified Bayesian framework. Theoretically and empirically, the approach reduces per-iteration time complexity from $\mathcal{O}(m^3 + d^3)$ to $\mathcal{O}(m^3 + p^2)$, where $p \ll d$, significantly improving scalability. Experiments on synthetic and real-world benchmarks demonstrate both high predictive accuracy and strong sparsity, improving interpretability for large-scale multi-output regression.

📝 Abstract
We propose Network Automatic Relevance Determination (NARD), an extension of ARD for linear probabilistic models, to simultaneously model sparse relationships between inputs $X \in \mathbb{R}^{d \times N}$ and outputs $Y \in \mathbb{R}^{m \times N}$, while capturing the correlation structure among the outputs $Y$. NARD employs a matrix normal prior which contains a sparsity-inducing parameter to identify and discard irrelevant features, thereby promoting sparsity in the model. Algorithmically, it iteratively updates both the precision matrix and the relationship between $Y$ and the refined inputs. To mitigate the $\mathcal{O}(m^3 + d^3)$ computational cost per iteration, we introduce Sequential NARD, which evaluates features sequentially, and a Surrogate Function Method, which leverages an efficient approximation of the marginal likelihood and simplifies the calculation of the determinant and inverse of an intermediate matrix. Combining the Sequential update with the Surrogate Function method further reduces computational cost. The per-iteration computational complexity of these three methods is reduced to $\mathcal{O}(m^3 + p^3)$, $\mathcal{O}(m^3 + d^2)$, and $\mathcal{O}(m^3 + p^2)$, respectively, where $p \ll d$ is the final number of features in the model. Our methods demonstrate significant improvements in computational efficiency with comparable performance on both synthetic and real-world datasets.
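To make the pruning loop described in the abstract concrete, the following is a minimal ARD-style sketch in NumPy. It is a simplification under stated assumptions, not the paper's exact NARD update: a scalar noise precision `beta` stands in for the full output precision matrix that NARD learns, the per-feature precisions `alpha` are shared across the $m$ outputs, and the function name and threshold are illustrative.

```python
import numpy as np

def nard_style_ard(X, Y, n_iter=50, prune_thresh=1e6):
    """Minimal ARD-style sketch of the feature-pruning loop (not the paper's
    exact NARD updates). X: (d, N) inputs, Y: (m, N) outputs."""
    d, N = X.shape
    m, _ = Y.shape
    alpha = np.ones(d)          # per-feature relevance precisions
    beta = 1.0                  # noise precision, simplified to a scalar here
    active = np.arange(d)

    for _ in range(n_iter):
        # Drop features whose precision has diverged: they are deemed irrelevant.
        active = active[alpha[active] < prune_thresh]
        Xa = X[active]                                   # (p, N) active inputs
        A = np.diag(alpha[active])
        Sigma = np.linalg.inv(beta * Xa @ Xa.T + A)      # posterior covariance (p, p)
        W = beta * Y @ Xa.T @ Sigma                      # posterior mean weights (m, p)

        # MacKay-style re-estimation of alpha and beta, aggregated over outputs.
        gamma = 1.0 - alpha[active] * np.diag(Sigma)     # "well-determined" measure
        alpha[active] = m * gamma / (np.sum(W ** 2, axis=0) + 1e-12)
        resid = Y - W @ Xa
        beta = (m * N - m * gamma.sum()) / (np.sum(resid ** 2) + 1e-12)

    return W, active
```

With $X$ of shape $(d, N)$ and $Y$ of shape $(m, N)$, the returned `active` index set plays the role of the final $p \ll d$ retained features.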
Problem

Research questions and friction points this paper is trying to address.

Model sparse input-output relationships with correlated outputs
Reduce computational cost of feature selection in large datasets
Improve efficiency via sequential updates and surrogate approximations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matrix normal prior induces sparsity in model
Sequential NARD reduces feature evaluation cost (see the sketch after this list)
Surrogate Function Method approximates likelihood efficiently
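The sequential feature evaluation mentioned above can be illustrated with the classical sparsity/quality test from fast sparse Bayesian learning, shown here for a single output. This is a hedged stand-in, not the paper's exact Sequential NARD criterion; `should_add_feature`, `phi_j`, and `C_inv` are illustrative names, with `C_inv` assumed to be the inverse marginal covariance of the current model.

```python
import numpy as np

def should_add_feature(phi_j, C_inv, t):
    """Sequential test for one candidate feature (single-output illustration).

    phi_j: (N,) candidate feature vector, t: (N,) targets,
    C_inv: (N, N) inverse marginal covariance of the current model
    (i.e. excluding phi_j). Returns (include, optimal alpha_j)."""
    s = phi_j @ C_inv @ phi_j      # sparsity factor
    q = phi_j @ C_inv @ t          # quality factor
    if q ** 2 > s:                 # adding the feature raises the evidence
        return True, s ** 2 / (q ** 2 - s)
    return False, np.inf           # alpha_j -> infinity: feature stays excluded
```

The test $q^2 > s$ is the standard condition under which the marginal likelihood has a finite optimum in $\alpha_j$; otherwise the optimum is at $\alpha_j \to \infty$ and the feature is left out, which is how sequential evaluation keeps the per-step cost low.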
Hongwei Zhang
Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China; School of Data Science, Fudan University, Shanghai, China; Shanghai Academy of Artificial Intelligence for Science, Shanghai, China
Ziqi Ye
Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom; Shanghai Academy of Artificial Intelligence for Science, Shanghai, China
Xinyuan Wang
Eberly College of Science, Pennsylvania State University, PA, United States; Shanghai Academy of Artificial Intelligence for Science, Shanghai, China
Xin Guo
Shanghai Academy of Artificial Intelligence for Science, Shanghai, China
Zenglin Xu
Fudan University
Machine Learning, Trustworthy AI, Federated Learning, Large Language Models, Time Series Analysis
Yuan Cheng
Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China; Shanghai Academy of Artificial Intelligence for Science, Shanghai, China
Zixin Hu
Associate Professor, Fudan University
Yuan Qi
Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, China; Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Academy of Artificial Intelligence for Science, Shanghai, China