rETF-semiSL: Semi-Supervised Learning for Neural Collapse in Temporal Data

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of geometric structure constraints in pretrained representations for semi-supervised time-series learning. To this end, the authors propose a neural collapse–guided pretraining framework that jointly combines a generative pretraining objective, temporal augmentation strategies, and a pseudo-labeling mechanism, while an Equiangular Tight Frame (ETF) classifier enforces geometric alignment of the latent representations. The framework is architecture-agnostic, supporting backbones as diverse as LSTMs, Transformers, and state-space models. Experiments on three multivariate time-series classification benchmarks show that the approach outperforms existing pretraining paradigms, with substantial gains in downstream classification accuracy under limited labels. These results support the effectiveness and generalizability of neural collapse–oriented representation learning for time-series modeling.
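As a concrete illustration of the ETF head, below is a minimal PyTorch sketch of a fixed simplex-ETF classifier using the standard neural-collapse construction M = sqrt(C/(C-1)) · U(I_C - (1/C)·11ᵀ), where U has orthonormal columns. The module name, the frozen head, and the cosine logits are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def simplex_etf(num_classes: int, dim: int) -> torch.Tensor:
    """Fixed simplex ETF: num_classes unit prototypes in R^dim with
    pairwise cosine similarity -1/(num_classes - 1), the geometry that
    neural collapse drives classifier weights toward."""
    assert dim >= num_classes - 1
    C = num_classes
    # Random orthonormal columns U: a rotation of the canonical frame.
    U, _ = torch.linalg.qr(torch.randn(dim, C))
    # M = sqrt(C / (C - 1)) * U @ (I_C - 11^T / C); columns are prototypes.
    M = (C / (C - 1)) ** 0.5 * U @ (torch.eye(C) - torch.ones(C, C) / C)
    return M


class ETFClassifier(nn.Module):
    """Linear head with frozen ETF weights: only the encoder trains,
    so features are pulled toward the fixed equiangular prototypes."""

    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.register_buffer("prototypes", simplex_etf(num_classes, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = F.normalize(z, dim=-1)      # put features on the unit sphere
        return z @ self.prototypes      # cosine logits against each class
```

A usage such as `head = ETFClassifier(num_classes=5, dim=128)` on the encoder's pooled embedding would fit this sketch; note that any orthonormal U yields an equally valid simplex ETF.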

📝 Abstract
Deep neural networks for time series must capture complex temporal patterns to effectively represent dynamic data. Self- and semi-supervised learning methods show promising results in pre-training large models, which -- when fine-tuned for classification -- often outperform their counterparts trained from scratch. Still, the choice of pretext training tasks is often heuristic and their transferability to downstream classification is not guaranteed. We therefore propose a novel semi-supervised pre-training strategy that enforces latent representations satisfying the Neural Collapse phenomenon observed in optimally trained neural classifiers. We use a rotational equiangular tight frame (ETF) classifier and pseudo-labeling to pre-train deep encoders with few labeled samples. Furthermore, to effectively capture temporal dynamics while enforcing embedding separability, we integrate generative pretext tasks with our method and define a novel sequential augmentation strategy. We show that our method significantly outperforms previous pretext tasks when applied to LSTMs, Transformers, and state-space models on three multivariate time series classification datasets. These results highlight the benefit of aligning pre-training objectives with a theoretically grounded embedding geometry.
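Pseudo-labeling with few labeled samples is commonly done by thresholding prediction confidence; the sketch below shows that generic variant in PyTorch. The function name and the 0.95 threshold are assumptions for illustration, not the paper's exact selection rule.

```python
import torch
import torch.nn.functional as F


def pseudo_label_loss(logits_u: torch.Tensor, threshold: float = 0.95) -> torch.Tensor:
    """Cross-entropy on confident pseudo-labels only; unconfident
    unlabeled samples contribute nothing this step."""
    probs = logits_u.softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)            # confidence and predicted class
    mask = conf >= threshold                    # keep only confident predictions
    if not mask.any():
        return logits_u.new_zeros(())           # no confident samples this batch
    return F.cross_entropy(logits_u[mask], pseudo[mask])
```

In a combined semi-supervised objective, this term would typically be weighted against the supervised cross-entropy on the labeled samples and the generative reconstruction loss.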
Problem

Research questions and friction points this paper is trying to address.

Enforcing Neural Collapse in semi-supervised temporal data learning
Improving transferability of pretext tasks to downstream classification
Capturing temporal dynamics while ensuring embedding separability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-supervised pre-training with Neural Collapse
Rotational equiangular tight frame (ETF) classifier with pseudo-labeling
Generative pretext tasks with a novel sequential augmentation strategy (see the sketch below)
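The sequential augmentation strategy itself is not detailed in this summary, so the sketch below is a generic temporal stand-in (Gaussian jitter plus a contiguous time mask) for (batch, time, channels) inputs; both operations and their default strengths are assumptions.

```python
import torch


def temporal_augment(x: torch.Tensor,
                     jitter_std: float = 0.03,
                     mask_ratio: float = 0.15) -> torch.Tensor:
    """Illustrative augmentations for a (batch, time, channels) tensor:
    additive Gaussian noise, then a random contiguous time window
    zeroed out per sequence."""
    x = x + jitter_std * torch.randn_like(x)    # jitter every timestep
    B, T, _ = x.shape
    span = max(1, int(mask_ratio * T))
    starts = torch.randint(0, T - span + 1, (B,))
    for i, s in enumerate(starts.tolist()):     # zero a random window
        x[i, s:s + span] = 0.0
    return x
```

Augmented views like these would feed the generative pretext task and the pseudo-labeled branch during pre-training.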