Interpretable Bayesian Tensor Network Kernel Machines with Automatic Rank and Feature Selection

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tensor network (TN)-based kernel methods are predominantly deterministic, hindering principled quantification of parameter uncertainty and requiring manual tuning of hyperparameters such as tensor rank and feature dimensionality. To address these limitations, we propose a Bayesian tensor network kernel learning framework. Our method employs hierarchical sparsity-inducing priors and mean-field variational inference to jointly and automatically infer tensor rank, salient features, and model complexity—enabling adaptive regularization without cross-validation. Crucially, it provides calibrated uncertainty estimates for model parameters while preserving interpretability and computational efficiency. Extensive experiments on synthetic and real-world datasets demonstrate significant improvements in predictive accuracy, uncertainty calibration, robustness in feature identification, and scalability. The framework establishes a generalizable, probabilistic TN-based paradigm for kernel learning.
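To make the prior structure concrete, below is a minimal NumPy sketch of an ARD-style hierarchical sparsity-inducing prior on CP-format TN factors, together with the resulting prediction rule. The Gamma–Gaussian hierarchy, the toy polynomial feature map, and all hyperparameter values are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

D, M, R = 8, 4, 10   # input features, feature-map dimension, over-specified CP rank
a0, b0 = 1.0, 1.0    # Gamma hyperparameters (illustrative; broad values like 1e-6 are typical)

# Hierarchical ARD-style prior: one precision lambda_r per rank component,
# shared across all D factor matrices. A large lambda_r shrinks column r of
# every factor towards zero simultaneously -> automatic rank selection.
lam = rng.gamma(a0, 1.0 / b0, size=R)                # lambda_r ~ Gamma(a0, b0)
factors = [rng.normal(size=(M, R)) / np.sqrt(lam)    # W_d[:, r] ~ N(0, 1 / lambda_r)
           for _ in range(D)]

def predict(x, factors):
    """CP-format prediction: f(x) = sum_r prod_d z_d(x_d)^T W_d[:, r]."""
    prod = np.ones(R)
    for d, W in enumerate(factors):
        z = x[d] ** np.arange(M)   # toy polynomial feature map, a stand-in only
        prod *= z @ W
    return prod.sum()

print(predict(rng.normal(size=D), factors))
```

Because each precision is shared across all factor matrices, the posterior can switch off a whole rank component at once, which is what makes the rank inference automatic rather than a tuned hyperparameter.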

📝 Abstract
Tensor Network (TN) kernel machines speed up model learning by representing parameters as low-rank TNs, reducing computation and memory use. However, most TN-based kernel methods are deterministic and ignore parameter uncertainty. Further, they require manual tuning of model complexity hyperparameters such as tensor rank and feature dimensions, often through trial and error or computationally costly methods like cross-validation. We propose Bayesian Tensor Network Kernel Machines, a fully probabilistic framework that uses sparsity-inducing hierarchical priors on TN factors to automatically infer model complexity. This enables automatic inference of tensor rank and feature dimensions, while also identifying the most relevant features for prediction, thereby enhancing model interpretability. All model parameters and hyperparameters are treated as latent variables with corresponding priors. Given the Bayesian approach and the latent variable dependencies, we apply mean-field variational inference to approximate their posteriors. We show that applying a mean-field approximation to the TN factors yields a Bayesian ALS algorithm with the same computational complexity as its deterministic counterpart, enabling uncertainty quantification at no extra computational cost. Experiments on synthetic and real-world datasets demonstrate the superior performance of our model in prediction accuracy, uncertainty quantification, interpretability, and scalability.
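The abstract's key computational claim — that the mean-field updates reduce to a Bayesian ALS sweep at deterministic-ALS cost — can be sketched as follows. This assumes a CP-format model with a Gaussian likelihood (noise precision `tau`) and ARD precisions `lam`; the variable names and the exact conditioning are illustrative, not the paper's derivation:

```python
import numpy as np

def bayesian_als_update(Z_d, G_minus_d, y, lam, tau):
    """Closed-form mean-field update for one vectorized factor matrix W_d.

    Z_d:       (N, M) feature maps for input dimension d
    G_minus_d: (N, R) row-wise products z_k(x_k)^T W_k over all k != d
    y:         (N,)   targets
    lam:       (R,)   expected ARD precisions; tau: expected noise precision
    """
    N, M = Z_d.shape
    R = G_minus_d.shape[1]
    # Row-wise Khatri-Rao design matrix: the same matrix deterministic ALS
    # would build, so the dominant cost (the normal-equations solve) matches.
    A = np.einsum('nm,nr->nmr', Z_d, G_minus_d).reshape(N, M * R)
    prior_prec = np.kron(np.eye(M), np.diag(lam))      # block-diagonal ARD prior
    Sigma = np.linalg.inv(tau * A.T @ A + prior_prec)  # posterior covariance
    mean = tau * Sigma @ (A.T @ y)                     # posterior mean
    return mean.reshape(M, R), Sigma
```

A full sweep would cycle this update over d = 1, …, D and interleave the standard conjugate Gamma updates for `lam` and `tau`; compared with deterministic ALS, the only additions are the prior precisions on the diagonal and the posterior covariance, which falls out of the same solve.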
Problem

Research questions and friction points this paper is trying to address.

Automatically infer model complexity in Tensor Network Kernel Machines
Enable automatic tensor rank and feature dimension selection
Enhance interpretability and uncertainty quantification without extra computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian TN kernel machines with automatic rank selection (see the pruning sketch after this list)
Sparsity-inducing priors for feature selection
Mean-field variational inference for efficiency
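As a companion to the bullets above, a hedged sketch of how the inferred ARD precisions could drive the automatic rank and feature selection: components whose expected precision has diverged are pruned. The function name and the threshold value are illustrative assumptions, not the paper's criterion:

```python
import numpy as np

def prune_components(factors, lam_mean, threshold=1e4):
    """Drop rank components whose expected ARD precision E[lambda_r] has grown
    large, i.e. whose factor columns the posterior has shrunk to ~zero."""
    keep = lam_mean < threshold               # illustrative cutoff
    return [W[:, keep] for W in factors], lam_mean[keep]
```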
Afra Kilic
Delft Center for Systems and Control, Delft University of Technology, Delft, 2628 CN, The Netherlands
Kim Batselier
Delft University of Technology
Green AI · System identification · Tensors · Nonlinear systems · Machine learning