Score-Based Model for Low-Rank Tensor Recovery

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional low-rank tensor decomposition methods (e.g., CP, Tucker) rely on prespecified structural assumptions and restrictive distributional priors (e.g., Dirac delta), limiting modeling flexibility and recovery accuracy. Method: We propose a structure-agnostic, score-driven tensor recovery framework: a neural network parameterizes an energy function, and score matching is employed to learn the joint log-probability gradient with respect to tensor entries and shared latent factors; combined with smoothed regularization, a block-coordinate descent algorithm unifies tensor completion and denoising. Contribution/Results: Our approach eliminates fixed shrinkage rules and explicit low-rank structural assumptions, enabling adaptive, distribution-agnostic modeling of sparse, continuous-time, and visual tensors. Experiments on diverse real-world datasets demonstrate significant improvements over state-of-the-art decomposition and generative models, achieving both superior expressivity and high-fidelity reconstruction.

📝 Abstract
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis. Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions. From a probabilistic perspective, these can be viewed as using Dirac delta distributions to model the relationships between shared factors and the low-rank tensor. However, such prior knowledge is rarely available in practical scenarios, particularly regarding the optimal rank structure and contraction rules. The optimization procedures based on fixed contraction rules are complex, and approximations made during these processes often lead to accuracy loss. To address this issue, we propose a score-based model that eliminates the need for predefined structural or distributional assumptions, enabling the learning of compatibility between tensors and shared factors. Specifically, a neural network is designed to learn the energy function, which is optimized via score matching to capture the gradient of the joint log-probability of tensor entries and shared factors. Our method allows for modeling structures and distributions beyond the Dirac delta assumption. Moreover, integrating the block coordinate descent (BCD) algorithm with the proposed smooth regularization enables the model to perform both tensor completion and denoising. Experimental results demonstrate significant performance improvements across various tensor types, including sparse and continuous-time tensors, as well as visual data.
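The abstract's core mechanism is learning the score (the gradient of the log-probability) via score matching instead of assuming a fixed distribution. The paper's energy network and tensor factors are not reproduced here; the sketch below is only a minimal 1-D illustration of denoising score matching, using a hypothetical linear score model on Gaussian toy data, where the smoothed density's score is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D samples from N(2, 1).
x = rng.normal(2.0, 1.0, size=5000)

# Denoising score matching: perturb the data with noise of std sigma and
# regress the model score s_theta(x_noisy) onto -(x_noisy - x) / sigma**2.
sigma = 0.5
x_noisy = x + sigma * rng.normal(size=x.shape)
target = -(x_noisy - x) / sigma**2

# Hypothetical linear score model s(x) = a*x + b (the score of any
# Gaussian is linear), fitted in closed form by least squares.
A = np.stack([x_noisy, np.ones_like(x_noisy)], axis=1)
a, b = np.linalg.lstsq(A, target, rcond=None)[0]

# The estimate targets the score of the sigma-smoothed density
# N(2, 1 + sigma**2): score(x) = -(x - 2) / 1.25, i.e. a ≈ -0.8, b ≈ 1.6.
print(a, b)
```

In the paper this regression target is replaced by the gradient of a neural energy function over tensor entries and shared factors, but the training principle is the same.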
Problem

Research questions and friction points this paper is trying to address.

Overcoming predefined structural assumptions in tensor decompositions
Learning tensor-factor compatibility without prior distribution knowledge
Enhancing tensor recovery accuracy via score-based modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Score-based model learns tensor compatibility
Neural network optimizes energy via score matching
BCD algorithm enables completion and denoising
Authors

Zhengyun Cheng
School of Electronics and Information, Northwestern Polytechnical University
Changhao Wang
UC Berkeley
Guanwen Zhang
School of Electronics and Information, Northwestern Polytechnical University
Yi Xu
School of Control Science and Engineering, Dalian University of Technology
Wei Zhou
School of Electronics and Information, Northwestern Polytechnical University
Xiangyang Ji
Department of Automation, Tsinghua University