Neural Operator-Grounded Continuous Tensor Function Representation and Its Applications

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tensor function representations are limited by discrete and linear mode-n products, which struggle to flexibly model complex continuous data. This work proposes NO-CTR, a neural operator–driven framework for continuous tensor function representation, which introduces neural operators into tensor function modeling for the first time. NO-CTR constructs continuous, nonlinear mode-n operators that directly map continuous core functions to target tensor functions. The framework is theoretically proven to possess universal approximation capability. Extensive experiments on multidimensional data—including multispectral images, color videos, Sentinel-2 imagery, and point clouds—demonstrate that NO-CTR significantly outperforms existing methods in both representational capacity and reconstruction accuracy.
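For context, the discrete, linear mode-n product that NO-CTR generalizes contracts one mode of a core tensor with a factor matrix; applying it along every mode gives the classic Tucker reconstruction. A minimal NumPy sketch of this baseline (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def mode_n_product(tensor, matrix, n):
    """Discrete, linear mode-n product: contracts mode n of `tensor`
    (size I_n) with `matrix` of shape (J_n, I_n)."""
    t = np.moveaxis(tensor, n, 0)               # bring mode n to the front
    out = np.tensordot(matrix, t, axes=(1, 0))  # contract: (J_n, ...)
    return np.moveaxis(out, 0, n)               # restore mode order

rng = np.random.default_rng(0)
core = rng.standard_normal((3, 4, 5))           # small Tucker core
U = [rng.standard_normal((8, 3)),               # one factor matrix per mode
     rng.standard_normal((9, 4)),
     rng.standard_normal((10, 5))]

# Tucker reconstruction: apply the mode-n product along every mode.
X = core
for n, Un in enumerate(U):
    X = mode_n_product(X, Un, n)
print(X.shape)  # (8, 9, 10)
```

Because each factor matrix is a fixed grid of numbers, this construction can only produce values at the sampled grid points; NO-CTR's mode-n operators replace these matrices with continuous, nonlinear maps between functions.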

📝 Abstract
Recently, continuous tensor functions have attracted increasing attention because they can uniformly represent data both on and beyond mesh grids. However, since the mode-$n$ product is essentially discrete and linear, the potential of current continuous tensor function representations remains locked. To break this bottleneck, we suggest neural operator-grounded mode-$n$ operators as a continuous and nonlinear alternative to the discrete and linear mode-$n$ product. Instead of mapping a discrete core tensor to a discrete target tensor, the proposed mode-$n$ operator directly maps the continuous core tensor function to the continuous target tensor function, which provides a genuinely continuous representation of real-world data and can ameliorate discretization artifacts. Empowered with continuous and nonlinear mode-$n$ operators, we propose a neural operator-grounded continuous tensor function representation (abbreviated as NO-CTR), which more faithfully represents complex real-world data than classic discrete tensor representations and continuous tensor function representations. Theoretically, we also prove that any continuous tensor function can be approximated by NO-CTR. To examine the capability of NO-CTR, we suggest an NO-CTR-based multi-dimensional data completion model. Extensive experiments across various data on regular mesh grids (multi-spectral images and color videos), on mesh grids with different resolutions (Sentinel-2 images), and beyond mesh grids (point clouds) demonstrate the superiority of NO-CTR.
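To make the continuous idea concrete: a tensor function can be built by contracting a small core with per-mode factor *functions* of real-valued coordinates, so it is evaluable at any point, including off-grid. The toy sketch below (not the paper's NO-CTR architecture; all weights and names are illustrative) uses a tiny one-hidden-layer network per mode as a nonlinear factor function:

```python
import numpy as np

class ToyContinuousTensor:
    """Toy continuous tensor function T(x, y, z): a small core tensor
    contracted with coordinate-dependent, nonlinear factor functions
    instead of fixed factor matrices."""

    def __init__(self, rank=(3, 3, 3), n_feat=8, seed=0):
        rng = np.random.default_rng(seed)
        self.core = rng.standard_normal(rank)
        # One tiny 1-hidden-layer network per mode: R -> R^{rank_n}.
        self.W1 = [rng.standard_normal((n_feat, 1)) for _ in rank]
        self.b1 = [rng.standard_normal((n_feat, 1)) for _ in rank]
        self.W2 = [rng.standard_normal((r, n_feat)) for r in rank]

    def factor(self, n, x):
        """Nonlinear factor function for mode n at real coordinate x."""
        h = np.tanh(self.W1[n] * x + self.b1[n])  # hidden features, (n_feat, 1)
        return self.W2[n] @ h                     # (rank_n, 1)

    def __call__(self, x, y, z):
        fx = self.factor(0, x)[:, 0]
        fy = self.factor(1, y)[:, 0]
        fz = self.factor(2, z)[:, 0]
        # Contract the core with one factor vector per mode.
        return float(np.einsum('abc,a,b,c->', self.core, fx, fy, fz))

T = ToyContinuousTensor()
value = T(0.25, 0.5, 0.75)  # evaluable at arbitrary real coordinates
```

Sampling this function on a grid recovers a discrete tensor, while querying it between grid points gives the mesh-free behavior the abstract describes; NO-CTR goes further by making the mode-wise maps genuine operators between core and target functions.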
Problem

Research questions and friction points this paper is trying to address.

continuous tensor function
mode-n product
discretization artifacts
nonlinear representation
real-world data representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural operator
continuous tensor function
nonlinear mode-n operator
mesh-free representation
tensor completion
🔎 Similar Papers
2023-10-19 · International Conference on Machine Learning · Citations: 5
2024-06-10 · arXiv.org · Citations: 14
👥 Authors
Ruoyang Su · School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Xi-Le Zhao · University of Electronic Science and Technology of China · sparse and low-rank modeling for high-dimensional data analysis
Sheng Liu · School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Wei-Hao Wu · School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
Yisi Luo · Xi'an Jiaotong University · computer vision
Michael K. Ng · Department of Mathematics, Hong Kong Baptist University, Hong Kong, China