Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification

πŸ“… 2024-07-31
πŸ›οΈ Neural Information Processing Systems
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
Multi-class variable-duration (MVD) segmented time series classification faces two key challenges: (1) neglect of temporal dependencies between adjacent segments and (2) inconsistent annotation boundaries. Method: Departing from the i.i.d. assumption, this work formally proves, for the first time, the discriminative gain conferred by contextual information. We propose a two-level context-prior-driven consistency learning framework integrating: (i) context-aware consistency regularization, (ii) dynamic neighborhood contrastive learning, (iii) soft boundary label smoothing, and (iv) temporal-adaptive feature alignment. Contribution/Results: These components jointly model inter-segment temporal dependencies and mitigate boundary annotation noise. Evaluated on multiple benchmark datasets, the method achieves average accuracy improvements of 3.2%–7.8%, demonstrating significantly improved robustness and tolerance to labeling noise.
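The summary above mentions soft boundary label smoothing as a way to tolerate inconsistent annotation boundaries. The paper's exact formulation is not reproduced here; the following is a minimal illustrative sketch, where the function name, the linear interpolation scheme, and the `boundary_width` parameter are all assumptions of this sketch rather than details from Con4m:

```python
import numpy as np

def soft_boundary_labels(hard_labels, boundary_width=5):
    """Soften one-hot labels near class-change boundaries (illustrative sketch).

    hard_labels: 1-D integer array of per-segment class ids.
    Returns an (n, k) array of soft label distributions; segments within
    `boundary_width` of a class change are linearly interpolated between
    the two adjacent classes, approaching 50/50 right at the boundary.
    """
    n = len(hard_labels)
    k = int(hard_labels.max()) + 1
    soft = np.eye(k)[hard_labels].astype(float)
    # Indices where a new class begins.
    boundaries = np.flatnonzero(np.diff(hard_labels)) + 1
    for b in boundaries:
        left, right = hard_labels[b - 1], hard_labels[b]
        for offset in range(boundary_width):
            # Weight of the segment's own class decays toward 0.5 at the boundary.
            w = 0.5 + 0.5 * (offset + 1) / (boundary_width + 1)
            i = b - 1 - offset
            if i >= 0 and hard_labels[i] == left:
                soft[i] = 0.0
                soft[i, left], soft[i, right] = w, 1.0 - w
            j = b + offset
            if j < n and hard_labels[j] == right:
                soft[j] = 0.0
                soft[j, right], soft[j, left] = w, 1.0 - w
    return soft
```

A model trained with cross-entropy against these targets is penalized less for disagreeing with a possibly noisy boundary placement, since segments adjacent to a boundary carry probability mass for both neighboring classes.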

πŸ“ Abstract
Time Series Classification (TSC) encompasses two settings: classifying entire sequences or classifying segmented subsequences. The raw time series for segmented TSC usually contain Multiple classes with Varying Duration of each class (MVD). Therefore, the characteristics of MVD pose unique challenges for segmented TSC, yet have been largely overlooked by existing works. Specifically, there exists a natural temporal dependency between consecutive instances (segments) to be classified within MVD. However, mainstream TSC models rely on the assumption that segments are independent and identically distributed (i.i.d.), modeling each segment in isolation. Additionally, annotators with varying expertise may provide inconsistent boundary labels, leading to unstable performance of noise-free TSC models. To address these challenges, we first formally demonstrate that valuable contextual information enhances the discriminative power of classification instances. Leveraging the contextual priors of MVD at both the data and label levels, we propose a novel consistency learning framework Con4m, which effectively utilizes contextual information more conducive to discriminating consecutive segments in segmented TSC tasks, while harmonizing inconsistent boundary labels for training. Extensive experiments across multiple datasets validate the effectiveness of Con4m in handling segmented TSC tasks on MVD. The source code is available at https://github.com/MrNobodyCali/Con4m.
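The abstract's core observation is that consecutive segments in MVD data are temporally dependent, so a model's predictions for neighboring segments should usually agree. As a hedged illustration only (this is not the paper's actual loss; the symmetric-KL choice and the `weight` parameter are assumptions of this sketch), a simple neighborhood consistency penalty over per-segment predictions could look like:

```python
import numpy as np

def neighborhood_consistency_loss(probs, weight=1.0):
    """Penalize divergence between predictions of adjacent segments (sketch).

    probs: (n, k) array of per-segment class probabilities.
    Computes the mean symmetric KL divergence between each pair of
    neighboring rows; under the MVD prior that adjacent segments
    usually share a class, smooth predictions are encouraged.
    """
    eps = 1e-12  # avoid log(0)
    p, q = probs[:-1] + eps, probs[1:] + eps
    kl_pq = np.sum(p * np.log(p / q), axis=1)
    kl_qp = np.sum(q * np.log(q / p), axis=1)
    return weight * np.mean(0.5 * (kl_pq + kl_qp))
```

Added to a standard per-segment classification loss, such a term pushes the model away from predictions that flip class rapidly between neighbors, which the i.i.d. assumption would otherwise permit.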
Problem

Research questions and friction points this paper is trying to address.

Addresses challenges in segmented time series classification with varying class durations
Leverages contextual information to improve discriminative power of classification
Harmonizes inconsistent boundary labels for stable model training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Context-aware learning for time series classification
Consistency framework harmonizing inconsistent labels
Utilizing contextual priors at data and label levels
Junru Chen
Zhejiang University
Tianyu Cao
Zhejiang University
Jing Xu
State Grid Power Supply Co. Ltd.
Jiahe Li
Zhejiang University
Zhilong Chen
Tsinghua University
Tao Xiao
Kyushu University
Yang Yang
Zhejiang University