🤖 AI Summary
This work proposes Functional Tucker Decomposition (FTD) to address a limitation of traditional tensor decomposition: when multidimensional data are generated by continuous processes, discretization loses essential continuity structure. By embedding reproducing kernel Hilbert spaces (RKHS) into the Tucker framework, FTD adaptively learns expressive factors under mode-wise continuity constraints, modeling continuous patterns without predefined basis functions while preserving the multilinear subspace structure. The approach thereby unifies the structural fidelity of the Tucker model with the functional flexibility of RKHS. Experiments on hyperspectral image classification and multivariate time series analysis demonstrate the effectiveness of integrating continuous function modeling with tensor decomposition.
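To make the construction concrete, one plausible way to write such a model (the notation below is assumed for illustration; the paper's exact formulation may differ) replaces each Tucker factor vector with a factor *function* drawn from a mode-wise RKHS:

```latex
% A core tensor G couples factor functions u that live in mode-wise RKHSs H_d
% (illustrative notation, not taken verbatim from the paper).
\mathcal{X}(t_1,\dots,t_D)\;\approx\;
\sum_{r_1=1}^{R_1}\cdots\sum_{r_D=1}^{R_D}
\mathcal{G}[r_1,\dots,r_D]\,\prod_{d=1}^{D} u^{(d)}_{r_d}(t_d),
\qquad u^{(d)}_{r_d}\in\mathcal{H}_d .
```

By the representer theorem, each factor function can then be expressed through kernel evaluations at the observed grid points, which is what removes the need for a predefined basis.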
📝 Abstract
Tensors provide a structured representation for multidimensional data, yet discretization can obscure important information when such data originate from continuous processes. We address this limitation by introducing a functional Tucker decomposition (FTD) that embeds mode-wise continuity constraints directly into the decomposition. The FTD employs reproducing kernel Hilbert spaces (RKHS) to model continuous modes without requiring an a priori basis, while preserving the multilinear subspace structure of the Tucker model. The RKHS-driven representation yields adaptive, expressive factor descriptions that enable targeted modeling of individual subspaces. We demonstrate the value of this approach on domain-variant tensor classification, illustrating its effectiveness on classification tasks in hyperspectral imaging and multivariate time series analysis and highlighting the benefits of combining structural decomposition with functional adaptability.
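As a rough illustration of how mode-wise continuity can be grafted onto a Tucker fit, the sketch below runs a standard alternating (HOOI-style) loop and passes each factor update through a kernel-ridge smoother built from an RBF Gram matrix over that mode's coordinates. This is a minimal approximation of the idea under our own assumptions, not the authors' algorithm; the names `rbf_gram` and `functional_tucker` and all hyperparameters (`length_scale`, `lam`) are hypothetical choices for the demo.

```python
# Minimal sketch: Tucker fit with kernel-smoothed factors along each mode,
# so learned factors vary smoothly in the modes' continuous coordinates.
import numpy as np

def rbf_gram(t, length_scale=0.1):
    """Gram matrix of an RBF kernel over 1-D grid coordinates t."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def unfold(X, mode):
    """Mode-k unfolding of a tensor into a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def functional_tucker(X, ranks, grids, lam=1e-2, n_iter=20):
    """Alternating Tucker fit with kernel-smoothed (continuous) factors."""
    D = X.ndim
    # Kernel-ridge smoothers S_k = K_k (K_k + lam I)^{-1}, one per mode.
    smoothers = []
    for t in grids:
        K = rbf_gram(np.asarray(t, dtype=float))
        smoothers.append(K @ np.linalg.inv(K + lam * np.eye(len(t))))
    # Initialize factors with truncated SVDs of the unfoldings (HOSVD).
    U = [np.linalg.svd(unfold(X, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    for _ in range(n_iter):
        for k in range(D):
            # Project X onto all other factors, then update mode k.
            Y = X
            for j in range(D):
                if j != k:
                    Y = np.moveaxis(
                        np.tensordot(np.linalg.pinv(U[j]), Y, axes=(1, j)),
                        0, j)
            Uk = np.linalg.svd(unfold(Y, k),
                               full_matrices=False)[0][:, :ranks[k]]
            U[k] = smoothers[k] @ Uk  # impose mode-wise continuity
    # Core via multilinear least squares with the (non-orthogonal) factors.
    G = X
    for k in range(D):
        G = np.moveaxis(
            np.tensordot(np.linalg.pinv(U[k]), G, axes=(1, k)), 0, k)
    return G, U

# Example usage on a 3-way tensor sampled on unit-interval grids:
# grids = [np.linspace(0, 1, n) for n in X.shape]
# G, U = functional_tucker(X, ranks=(3, 3, 3), grids=grids)
```

Smoothing the factors rather than re-deriving the full RKHS-regularized update keeps the sketch short; a faithful implementation would instead solve each factor subproblem as a kernel ridge regression with the RKHS norm as the penalty.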