🤖 AI Summary
To address the high computational complexity and overfitting arising from the exponential growth of kernel parameters in high-order Volterra system identification, this paper proposes a Bayesian tensor network approach. It compresses Volterra kernels via canonical polyadic (CP) decomposition and incorporates hierarchical sparsity-inducing priors to enable automatic rank selection and fading-memory modeling. Full Bayesian inference is employed to quantify predictive uncertainty rigorously. The method achieves interpretability through structured tensor representations and adaptive regularization without manual hyperparameter tuning, while naturally yielding calibrated uncertainty estimates at no additional computational cost. Experiments demonstrate that the proposed approach maintains competitive accuracy while substantially reducing computational overhead; moreover, its uncertainty quantification proves more reliable than conventional alternatives. Overall, it provides an efficient, robust, and probabilistic framework for nonlinear system modeling.
📝 Abstract
Modeling nonlinear systems with Volterra series is challenging because the number of kernel coefficients grows exponentially with the model order. This work introduces Bayesian Tensor Network Volterra kernel machines (BTN-V), extending the Bayesian Tensor Network framework to Volterra system identification. BTN-V represents Volterra kernels using canonical polyadic (CP) decomposition, reducing model complexity from O(I^D) to O(DIR), where D is the Volterra order, I the memory length, and R the CP rank. By treating all tensor components and hyperparameters as random variables, BTN-V provides predictive uncertainty estimation at no additional computational cost. Sparsity-inducing hierarchical priors enable automatic rank determination and the learning of fading-memory behavior directly from data, improving interpretability and preventing overfitting. Empirical results demonstrate competitive accuracy, enhanced uncertainty quantification, and reduced computational cost.
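To make the O(I^D) → O(DIR) reduction concrete, here is a minimal NumPy sketch (not the paper's implementation) of evaluating a degree-D homogeneous Volterra term whose kernel is stored in CP form; the function name and factor shapes are illustrative assumptions. Storing D factor matrices of shape (I, R) costs DIR parameters, while the full kernel tensor it implicitly represents has I^D entries.

```python
import numpy as np

def cp_volterra_predict(u, factors):
    """Evaluate f(u) = sum_r prod_d <factors[d][:, r], u>,
    i.e. a degree-D Volterra term with a rank-R CP kernel,
    without ever materializing the full I^D kernel tensor."""
    # inner[d, r] = inner product of the r-th column of factor d with u
    inner = np.stack([f.T @ u for f in factors])  # shape (D, R)
    # Multiply across the D modes, then sum over the R rank terms
    return inner.prod(axis=0).sum()

# Sanity check against the explicitly reconstructed kernel (D = 3 here):
rng = np.random.default_rng(0)
I, D, R = 4, 3, 2
factors = [rng.standard_normal((I, R)) for _ in range(D)]
u = rng.standard_normal(I)

fast = cp_volterra_predict(u, factors)          # O(DIR) work
H = np.einsum('ir,jr,kr->ijk', *factors)        # full I^D kernel tensor
full = np.einsum('ijk,i,j,k->', H, u, u, u)     # O(I^D) contraction
print(np.isclose(fast, full))
```

The two contractions agree because the CP factorization is exact by construction; the efficient path simply reorders the sums so the input window is contracted with each factor matrix first.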