From Deep Additive Kernel Learning to Last-Layer Bayesian Neural Networks via Induced Prior Approximation

📅 2025-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational complexity of deep kernel learning (DKL) under high-dimensional inputs, this paper proposes the Deep Additive Kernel (DAK) model. DAK integrates additive kernel structure with inducing-point-based prior approximations for the first time, enabling the final Gaussian process layer to naturally reduce to a scalable Bayesian neural network (BNN). This design preserves model interpretability while substantially reducing computational overhead. Methodologically, DAK unifies DKL and last-layer BNN frameworks within a single principled architecture. Empirically, it outperforms state-of-the-art DKL methods on both regression and classification benchmarks, achieving superior predictive accuracy, well-calibrated uncertainty estimates, and improved training scalability, without sacrificing expressiveness or theoretical grounding.

📝 Abstract
With the strengths of both deep learning and kernel methods like Gaussian Processes (GPs), Deep Kernel Learning (DKL) has gained considerable attention in recent years. From the computational perspective, however, DKL becomes challenging when the input dimension of the GP layer is high. To address this challenge, we propose the Deep Additive Kernel (DAK) model, which incorporates i) an additive structure for the last-layer GP; and ii) induced prior approximation for each GP unit. This naturally leads to a last-layer Bayesian neural network (BNN) architecture. The proposed method enjoys the interpretability of DKL as well as the computational advantages of BNN. Empirical results show that the proposed approach outperforms state-of-the-art DKL methods in both regression and classification tasks.
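The paper's exact model is not reproduced here, but the core reduction it describes can be sketched: give the last layer an additive structure (one 1-D GP unit per feature dimension) and approximate each unit's prior with inducing points, so each unit collapses to a linear map over kernel basis features and the whole head behaves like a (Bayesian) linear layer. A minimal illustrative sketch in NumPy, assuming RBF kernels, fixed grid inducing locations, and randomly initialized inducing outputs standing in for a learned posterior (the class and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def rbf(x, z, ls=1.0):
    # 1-D RBF kernel matrix between vectors x (n,) and z (m,)
    return np.exp(-0.5 * (x[:, None] - z[None, :]) ** 2 / ls**2)

class AdditiveGPHead:
    """Last-layer GP with additive structure: one 1-D GP unit per
    feature dimension, each approximated with M inducing points.
    The induced approximation f_d(x) ~= k_d(x, Z_d) @ K_d^{-1} u_d
    turns each unit into a linear layer over kernel basis features,
    so the additive head is itself a linear model over fixed features,
    which is the last-layer-BNN reduction the abstract refers to."""

    def __init__(self, dim, num_inducing=10, ls=1.0, jitter=1e-6):
        self.ls = ls
        # Fixed grid inducing locations per unit (illustrative choice)
        self.Z = [np.linspace(-2.0, 2.0, num_inducing) for _ in range(dim)]
        # Precompute K_zz^{-1} per unit, with jitter for stability
        self.Kinv = [
            np.linalg.inv(rbf(z, z, ls) + jitter * np.eye(num_inducing))
            for z in self.Z
        ]
        # Inducing outputs u_d; random init stands in for a learned posterior mean
        rng = np.random.default_rng(0)
        self.u = [rng.normal(size=num_inducing) for _ in range(dim)]

    def __call__(self, H):
        # H: (n, dim) features produced by a deep network; returns (n,)
        out = np.zeros(H.shape[0])
        for d, (z, Kinv, u) in enumerate(zip(self.Z, self.Kinv, self.u)):
            phi = rbf(H[:, d], z, self.ls) @ Kinv  # kernel basis features
            out += phi @ u                         # additive 1-D GP unit
        return out

H = np.random.default_rng(1).normal(size=(5, 3))  # stand-in deep features
head = AdditiveGPHead(dim=3)
print(head(H).shape)  # (5,)
```

Because each unit only evaluates a one-dimensional kernel against its own inducing set, the cost grows linearly in the feature dimension rather than requiring a full GP over the high-dimensional feature vector.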
Problem

Research questions and friction points this paper is trying to address.

High computational cost of DKL when the input dimension of the GP layer is large
Scaling last-layer GP inference without sacrificing expressiveness
Retaining the interpretability of kernel methods while keeping training efficient
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Additive Kernel model
Induced prior approximation
Last-layer Bayesian neural network