CAFE: Channel-Autoregressive Factorized Encoding for Robust Biosignal Spatial Super-Resolution

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of spatial super-resolution reconstruction from low-density biosignal recordings, which is often compromised by sparse, noisy measurements that induce artifact propagation and spurious non-local correlations. To this end, the authors propose CAFE, a channel-autoregressive factorized encoding framework that reconstructs full-lead signals progressively from proximal to distal electrodes through a geometry-aligned, staged autoregressive expansion mechanism. The method integrates grouped supervision, curriculum sampling, and teacher-forcing scheduling to mitigate exposure bias while enabling parallel computation, and it is compatible with diverse temporal backbone architectures, including MLPs, convolutions, and Transformers. Extensive experiments across four modalities and six datasets demonstrate that CAFE consistently outperforms five baseline methods under three distinct backbones, exhibiting superior generalization and reconstruction fidelity.

📝 Abstract
High-density biosignal recordings are critical for neural decoding and clinical monitoring, yet real-world deployments often rely on low-density (LD) montages due to hardware and operational constraints. This motivates spatial super-resolution from LD observations, but heterogeneous dependencies under sparse and noisy measurements often lead to artifact propagation and false non-local correlations. To address this, we propose CAFE, a plug-and-play rollout generation scheme that reconstructs the full montage in geometry-aligned stages. Starting from the LD channels, CAFE first recovers nearby channels and then progressively expands to more distal regions, exploiting reliable local structure before introducing non-local interactions. During training, step-wise supervision is applied over channel groups, and teacher forcing with epoch-level scheduled sampling along the group dimension is utilized to reduce exposure bias, enabling parallel computation across steps. At test time, CAFE performs an autoregressive rollout across groups, while remaining plug-and-play by reusing any temporal backbone as the shared predictor. Evaluated on 4 modalities and 6 datasets, CAFE demonstrates plug-and-play generality across 3 backbones (MLP, Conv, Transformer) and achieves consistently better reconstruction than 5 representative baselines.
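The staged rollout described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the group ordering, array shapes, toy predictor, and the `rollout` helper are all hypothetical, and the scheduled-sampling probability is shown as a single knob rather than the paper's epoch-level schedule.

```python
# Hedged sketch of a CAFE-style channel-autoregressive rollout.
# Assumptions (not from the paper): a shared predictor f(known) -> next group,
# groups pre-ordered proximal -> distal, numpy arrays of shape (channels, time).
import numpy as np

rng = np.random.default_rng(0)

def rollout(x_ld, groups, predict, teacher=None, tf_prob=0.0):
    """Reconstruct channel groups in proximal-to-distal order.

    x_ld:    (C_ld, T) observed low-density channels
    groups:  list of group sizes, ordered near -> far
    predict: callable(known) -> (g, T) prediction for the next group
    teacher: optional list of ground-truth (g, T) arrays (training only)
    tf_prob: probability of feeding ground truth instead of the model's
             own prediction (the scheduled-sampling knob)
    """
    known = x_ld
    outputs = []
    for i, g in enumerate(groups):
        pred = predict(known)              # shared backbone reused each stage
        outputs.append(pred)
        feed = pred
        if teacher is not None and rng.random() < tf_prob:
            feed = teacher[i]              # teacher forcing along the group dim
        known = np.concatenate([known, feed], axis=0)
    return np.concatenate(outputs, axis=0)

# Toy stand-in predictor: each new group is the mean of the known channels.
T = 8
def toy_predict(known, g=2):
    return np.tile(known.mean(axis=0), (g, 1))

x_ld = rng.standard_normal((4, T))
full = rollout(x_ld, groups=[2, 2], predict=toy_predict)
print(full.shape)  # (4, 8): two reconstructed groups of two channels each
```

At test time `teacher=None`, so the loop is a pure autoregressive rollout; during training, per-group teacher forcing is what allows the stage-wise losses to be computed in parallel.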
Problem

Research questions and friction points this paper is trying to address.

spatial super-resolution
biosignal
low-density montage
artifact propagation
non-local correlations
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial super-resolution
channel-autoregressive
plug-and-play
scheduled sampling
geometry-aligned rollout
Hongjun Liu
NYU Ph.D. in Data Science
Natural Language Processing · Multimodal Reasoning
Leyu Zhou
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
Zijianghao Yang
School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
Rujun Han
Google
NLP · Machine Learning
Shitong Duan
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
Kuanjian Tang
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
Chao Yao
Northwestern Polytechnical University