Look Into the LITE in Deep Learning for Time Series Classification

📅 2024-09-04
🏛️ International Journal of Data Science and Analytics
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address parameter redundancy, high energy consumption, and slow inference in time-series classification, this paper proposes LITE (Light Inception with boosTing tEchnique), a lightweight architecture with only 9,814 parameters, along with its multivariate extension LITEMV. LITE combines depthwise separable convolutions with three boosting techniques: multiplexing, custom filters, and dilated convolution; LITEMV adapts this design to multivariate time series and human rehabilitation motion recognition, augmented with Class Activation Maps (CAM) for interpretability. On the UCR benchmark, LITE achieves a mean accuracy of 84.62%, trains 2.78× faster than InceptionTime, and consumes 2.79× less CO₂ and power. On the Kimore skeleton-based rehabilitation dataset, LITEMV attains state-of-the-art accuracy and efficiency.

📝 Abstract
Deep learning models have been shown to be a powerful solution for Time Series Classification (TSC). State-of-the-art architectures, while producing promising results on the UCR and the UEA archives, present a high number of trainable parameters. This can lead to long training times with high CO₂ emissions and power consumption, and a possible increase in the number of FLoating-point Operations Per Second (FLOPS). In this paper, we present a new architecture for TSC, the Light Inception with boosTing tEchnique (LITE), with only 2.34% of the number of parameters of the state-of-the-art InceptionTime model, while preserving performance. This architecture, with only 9,814 trainable parameters due to the usage of DepthWise Separable Convolutions (DWSC), is boosted by three techniques: multiplexing, custom filters, and dilated convolution. The LITE architecture, trained on the UCR, is 2.78 times faster than InceptionTime and consumes 2.79 times less CO₂ and power, while achieving an average accuracy of 84.62% compared to 84.91% with InceptionTime. To evaluate the performance of the proposed architecture on multivariate time series data, we adapt LITE to handle multivariate time series; we call this version LITEMV. To bring theory into application, we also conducted experiments using LITEMV on multivariate time series representing human rehabilitation movements, showing that LITEMV is not only the most efficient model but also the best performing for this application on the Kimore dataset, a skeleton-based human rehabilitation exercises dataset. Moreover, to address the interpretability of LITEMV, we present a study using Class Activation Maps to understand the classification decision taken by the model during evaluation.
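The abstract credits most of LITE's parameter savings to DepthWise Separable Convolutions (DWSC), one of whose boosting techniques is dilation. As an illustrative sketch only (not the authors' implementation; the channel and filter sizes below are made-up examples), a DWSC splits a standard convolution into a per-channel depthwise stage and a 1×1 pointwise mixing stage, which shrinks the parameter count:

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_filters, point_weights, dilation=1):
    """Dilated depthwise separable 1D convolution, 'valid' padding.

    x:             (channels, length) input time series
    depth_filters: (channels, k) one filter per input channel (depthwise stage)
    point_weights: (out_channels, channels) 1x1 mixing weights (pointwise stage)
    """
    c, n = x.shape
    k = depth_filters.shape[1]
    span = (k - 1) * dilation          # receptive field minus one
    out_len = n - span
    # Depthwise stage: each channel is convolved independently,
    # sampling every `dilation`-th time step.
    depth_out = np.zeros((c, out_len))
    for ch in range(c):
        for t in range(out_len):
            taps = x[ch, t : t + span + 1 : dilation]
            depth_out[ch, t] = taps @ depth_filters[ch]
    # Pointwise stage: a 1x1 convolution mixes channels at each time step.
    return point_weights @ depth_out

# Parameter count for one layer, standard vs separable (sizes are illustrative).
c_in, c_out, k = 32, 32, 9
standard = c_in * c_out * k          # every output channel sees every input channel
separable = c_in * k + c_in * c_out  # depthwise filters + pointwise mixing
```

With these example sizes the standard layer needs 9,216 weights against 1,312 for the separable one, roughly a 7× reduction per layer, which is the kind of saving that lets LITE stay under 10k parameters overall.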
Problem

Research questions and friction points this paper is trying to address.

Time Series Classification
Energy Efficiency
Simplified Computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LITE model
time series classification
energy efficiency
Ali Ismail-Fawaz
IRIMAS, Université de Haute-Alsace, Mulhouse, France.
M. Devanne
IRIMAS, Université de Haute-Alsace, Mulhouse, France.
Stefano Berretti
Professor of Computer Engineering, University of Firenze, Italy
3D Computer Vision · Pattern Recognition · Biometrics · Machine Learning
Jonathan Weber
Full Professor of Computer Science, Université de Haute-Alsace
Data Science · Deep Learning · Computer Vision · Time Series Classification · Artificial Intelligence
G. Forestier
IRIMAS, Université de Haute-Alsace, Mulhouse, France; DSAI, Monash University, Melbourne, Australia.