🤖 AI Summary
Existing functional tensor decomposition methods for high-dimensional continuous signals indexed by real-valued variables require pre-specifying the tensor rank, a parameter that is NP-hard to determine; moreover, these methods lack theoretical guarantees for approximation in the continuous domain.
Method: We propose the first general approximation theory for functional low-rank tensor models. Our approach introduces a Bayesian framework based on multi-output Gaussian processes and variational inference, enabling automatic learning of the optimal rank without manual pre-specification. We further design closed-form update rules to improve optimization efficiency.
Contribution/Results: Experiments on synthetic and real-world datasets demonstrate that our method significantly outperforms state-of-the-art alternatives in both modeling accuracy and adaptive rank selection. The theoretical foundation ensures robust approximation capability for continuous-domain functional tensors, while the data-driven rank estimation eliminates reliance on heuristic or manual rank specification.
📄 Abstract
Functional tensor decomposition can analyze multi-dimensional data with real-valued indices, paving the way for applications in machine learning and signal processing. A limitation of existing approaches is the assumption that the tensor rank, a critical parameter governing model complexity, is known. However, determining the optimal rank is a non-deterministic polynomial-time hard (NP-hard) task, and the expressive power of functional low-rank tensor models for continuous signals remains poorly understood. We propose a rank-revealing functional Bayesian tensor completion (RR-FBTC) method. By modeling the latent functions through carefully designed multi-output Gaussian processes, RR-FBTC handles tensors with real-valued indices while enabling automatic tensor rank determination during the inference process. We establish the universal approximation property of the model for continuous multi-dimensional signals, demonstrating its expressive power despite its concise form. To learn this model, we employ the variational inference framework and derive an efficient algorithm with closed-form updates. Experiments on both synthetic and real-world datasets demonstrate the effectiveness and superiority of RR-FBTC over state-of-the-art approaches. The code is available at https://github.com/OceanSTARLab/RR-FBTC.
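To make the core object concrete, the sketch below illustrates a generic functional low-rank (CP-style) tensor model, where each entry at real-valued indices is a sum over rank components of products of per-mode latent functions. This is a hypothetical, simplified illustration, not the paper's implementation: RR-FBTC places multi-output Gaussian process priors on these latent functions and infers the rank automatically, whereas here the functions and rank are fixed by hand for clarity.

```python
import numpy as np

# Hypothetical rank-2 functional CP model over a 3-mode continuous domain.
# Each rank component r contributes prod_d f_r^{(d)}(x_d), where f_r^{(d)}
# is the latent function for mode d (a GP in RR-FBTC; fixed functions here).
R = 2  # tensor rank (RR-FBTC determines this during inference)

factors = [
    [np.sin, np.cos],                      # mode-1 latent functions
    [lambda y: y, lambda y: y ** 2],       # mode-2 latent functions
    [np.exp, lambda z: 1.0 / (1 + z ** 2)] # mode-3 latent functions
]

def T(x, y, z):
    """Evaluate the functional low-rank tensor at real-valued indices."""
    point = (x, y, z)
    return sum(
        np.prod([factors[d][r](point[d]) for d in range(3)])
        for r in range(R)
    )

# Unlike a discrete tensor, T is defined at any real-valued index tuple,
# which is what allows completion of signals observed off a regular grid.
value = T(0.3, 1.7, -0.25)
```

The decomposition is "concise" in the sense the abstract describes: a 3-mode signal is represented by 3R one-dimensional functions rather than a dense multi-dimensional array.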