DeepConvContext: A Multi-Scale Approach to Timeseries Classification in Human Activity Recognition

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional human activity recognition (HAR) relies on sliding-window segmentation with independent per-window classification, which fails to capture long-range temporal dependencies across windows. To address this, we propose DeepConvContext, a multi-scale sliding-window sequence modeling framework based on Long Short-Term Memory (LSTM) networks that moves beyond isolated per-window classification. Our approach explicitly incorporates cross-window temporal context into HAR, enabling joint learning of intra-window local patterns and inter-window global dynamics. Experiments on six mainstream HAR benchmarks demonstrate an average 10% improvement in F1-score over the classic DeepConvLSTM, with gains of up to 21%. Ablation studies further indicate that LSTMs outperform attention-based variants for modeling inertial sensor time series. The implementation is fully open-sourced, supporting reproducibility.

📝 Abstract
Despite recognized limitations in modeling long-range temporal dependencies, Human Activity Recognition (HAR) has traditionally relied on a sliding window approach to segment labeled datasets. Deep learning models like the DeepConvLSTM typically classify each window independently, thereby restricting learnable temporal context to within-window information. To address this constraint, we propose DeepConvContext, a multi-scale time series classification framework for HAR. Drawing inspiration from the vision-based Temporal Action Localization community, DeepConvContext models both intra- and inter-window temporal patterns by processing sequences of time-ordered windows. Unlike recent HAR models that incorporate attention mechanisms, DeepConvContext relies solely on LSTMs -- with ablation studies demonstrating the superior performance of LSTMs over attention-based variants for modeling inertial sensor data. Across six widely-used HAR benchmarks, DeepConvContext achieves an average 10% improvement in F1-score over the classic DeepConvLSTM, with gains of up to 21%. Code to reproduce our experiments is publicly available via github.com/mariusbock/context_har.
Problem

Research questions and friction points this paper is trying to address.

Sliding-window segmentation with independent per-window classification restricts learnable temporal context to within-window information
Long-range temporal dependencies across windows go unmodeled in standard HAR pipelines such as DeepConvLSTM
A multi-scale framework is needed to jointly capture intra- and inter-window temporal patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-scale time series classification framework (DeepConvContext) that processes sequences of time-ordered windows
Jointly models intra-window local patterns and inter-window temporal context
Relies solely on LSTMs; ablations show they outperform attention-based variants on inertial sensor data
Achieves an average 10% F1-score improvement over DeepConvLSTM across six benchmarks, with gains of up to 21%
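The core data-preparation idea behind the framework is to group consecutive sliding windows into time-ordered sequences, so a model can learn patterns within each window and context across windows. The sketch below illustrates that segmentation step only; the parameter values (`window_len`, `stride`, `seq_len`) are illustrative and not the paper's actual hyperparameters, and the function name is hypothetical.

```python
import numpy as np

def make_window_sequences(data, window_len, stride, seq_len):
    """Segment (T, C) sensor data into sequences of time-ordered windows.

    Returns an array of shape (num_seqs, seq_len, window_len, C): each
    sequence stacks seq_len consecutive sliding windows, so a downstream
    model can learn intra-window patterns (along the window_len axis) and
    inter-window context (along the seq_len axis).
    """
    T, C = data.shape
    starts = range(0, T - window_len + 1, stride)
    # classic sliding-window segmentation: (num_windows, window_len, C)
    windows = np.stack([data[s:s + window_len] for s in starts])
    n_seqs = windows.shape[0] - seq_len + 1
    # overlapping groups of consecutive windows (stride 1 at the window scale)
    return np.stack([windows[i:i + seq_len] for i in range(n_seqs)])

# toy example: a 3-axis inertial stream of 1000 samples
x = np.random.randn(1000, 3)
seqs = make_window_sequences(x, window_len=100, stride=50, seq_len=4)
print(seqs.shape)  # (16, 4, 100, 3)
```

In a DeepConvContext-style model, a convolutional encoder would embed each window independently, and an LSTM would then run over the `seq_len` axis to propagate context between windows before per-window classification.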