🤖 AI Summary
Tucker decomposition requires manual pre-specification of the multi-rank, compromising the balance between model parsimony and structural expressiveness. Method: We propose a Bayesian adaptive Tucker decomposition model featuring (i) an infinite increasing shrinkage prior for fully automatic, data-driven multi-rank inference; (ii) a local sparsity prior on the core tensor that jointly captures inter-variable dependencies and intrinsic low-dimensional structure; and (iii) unified handling of continuous and binary data with joint missing-value imputation, implemented via an adaptive Gibbs sampler for computational efficiency. Contribution/Results: We establish posterior consistency theoretically. Empirical evaluation on chemometric and complex ecological datasets demonstrates substantial improvements over state-of-the-art methods in imputation accuracy, robustness, and interpretability, without requiring rank tuning.
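To make the structure concrete, here is a minimal mathematical sketch of the model class the summary describes: a Tucker decomposition of a three-way tensor, paired with an increasing shrinkage prior of the multiplicative-gamma type, one canonical member of the prior family named above. The hyperparameters $a_1, a_2$ and the specific prior form are illustrative assumptions; the paper's exact construction may differ.

```latex
% Tucker model for a three-way tensor Y (I x J x K):
% core tensor G (R1 x R2 x R3), factor matrices U^(m) per mode.
\[
  \mathcal{Y} \;\approx\; \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)},
  \qquad
  y_{ijk} \;=\; \sum_{r_1=1}^{R_1}\sum_{r_2=1}^{R_2}\sum_{r_3=1}^{R_3}
  g_{r_1 r_2 r_3}\, u^{(1)}_{i r_1}\, u^{(2)}_{j r_2}\, u^{(3)}_{k r_3}
  \;+\; \varepsilon_{ijk}.
\]
% One canonical increasing-shrinkage construction (multiplicative gamma
% process): column-wise precisions grow stochastically in the index h,
% so higher-indexed columns shrink toward zero and the effective
% multi-rank is inferred rather than fixed in advance.
\[
  \tau^{(m)}_h \;=\; \prod_{l=1}^{h} \delta^{(m)}_l,
  \qquad
  \delta^{(m)}_1 \sim \mathrm{Ga}(a_1, 1), \quad
  \delta^{(m)}_l \sim \mathrm{Ga}(a_2, 1) \;\; (l \ge 2,\; a_2 > 1).
\]
```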
📝 Abstract
Tucker tensor decomposition offers a more effective representation of multiway data than the widely used PARAFAC model. However, its flexibility brings the challenge of selecting an appropriate latent multi-rank. To avoid pre-selecting the multi-rank, we introduce a Bayesian adaptive Tucker decomposition model that infers it automatically via an infinite increasing shrinkage prior. The model imposes local sparsity on the core tensor, inducing rich yet parsimonious dependency structures. Posterior inference proceeds via an efficient adaptive Gibbs sampler that supports both continuous and binary data and allows straightforward missing-data imputation for incomplete multiway data. We discuss fundamental properties of the proposed modeling framework, providing theoretical justification. Simulation studies and applications to chemometrics and complex ecological data offer compelling evidence of its advantages over existing tensor factorization methods. A schematic code sketch of the underlying Tucker structure follows below.
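As a minimal sketch of the Tucker structure the abstract builds on (not the authors' sampler: the dimensions, ranks, random factors, noise level, and missingness rate below are all made up for illustration), reconstructing a three-way tensor from a core and three factor matrices can be written as a single contraction, and missing entries can be filled in from the model reconstruction:

```python
import numpy as np

# Hypothetical sizes; none of these values come from the paper.
I, J, K = 20, 15, 10        # observed tensor dimensions
R1, R2, R3 = 4, 3, 2        # Tucker multi-rank (one rank per mode)

rng = np.random.default_rng(0)
G = rng.normal(size=(R1, R2, R3))   # core tensor (locally sparse in the paper's model)
U1 = rng.normal(size=(I, R1))       # mode-1 factor matrix
U2 = rng.normal(size=(J, R2))       # mode-2 factor matrix
U3 = rng.normal(size=(K, R3))       # mode-3 factor matrix

# Tucker model Y ~ G x_1 U1 x_2 U2 x_3 U3 as one einsum contraction.
Y_hat = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)

# Missing-data imputation, schematically: observed entries constrain the fit;
# missing entries are filled from the model reconstruction (a plug-in stand-in
# for the posterior-draw imputation a Gibbs sampler would perform).
Y = Y_hat + 0.1 * rng.normal(size=(I, J, K))   # noisy synthetic "data"
mask = rng.random((I, J, K)) < 0.8             # True where an entry is observed
Y_imputed = np.where(mask, Y, Y_hat)

print(Y_hat.shape, round(mask.mean(), 2))      # (20, 15, 10) and observed fraction
```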