Enhancing Time Series Classification with Diversity-Driven Neural Network Ensembles

📅 2025-06-30
🏛️ IEEE International Joint Conference on Neural Networks
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural network ensemble methods for time series classification often lack explicit mechanisms to enforce diversity, leading to feature redundancy among constituent models and limiting overall ensemble performance. To address this limitation, this work proposes a diversity-driven ensemble framework that introduces, for the first time in this task, a feature orthogonality loss to explicitly encourage member models to learn complementary feature representations through decorrelated learning. Evaluated on 128 UCR benchmark datasets, the proposed method achieves state-of-the-art performance with fewer ensemble members, significantly improving classification accuracy, computational efficiency, and scalability compared to existing approaches.

📝 Abstract
Ensemble methods have played a crucial role in achieving state-of-the-art (SOTA) performance across various machine learning tasks by leveraging the diversity of features learned by individual models. In Time Series Classification (TSC), ensembles have proven highly effective, whether based on neural networks (NNs) or traditional methods like HIVE-COTE. However, most existing NN-based ensemble methods for TSC train multiple models with identical architectures and configurations. These ensembles aggregate predictions without explicitly promoting diversity, which often leads to redundant feature representations and limits the benefits of ensembling. In this work, we introduce a diversity-driven ensemble learning framework that explicitly encourages feature diversity among neural network ensemble members. Our approach employs a decorrelated learning strategy using a feature orthogonality loss applied directly to the learned feature representations. This ensures that each model in the ensemble captures complementary rather than redundant information. We evaluate our framework on 128 datasets from the UCR archive and show that it achieves SOTA performance with fewer models. This makes our method both efficient and scalable compared to conventional NN-based ensemble approaches.
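The paper does not spell out the exact form of its feature orthogonality loss, but a common way to realize decorrelated learning between two ensemble members is to standardize each member's feature matrix and penalize the squared Frobenius norm of their cross-correlation. The sketch below (NumPy, with the function name `orthogonality_loss` and all shapes chosen for illustration) shows this generic cross-correlation penalty, not the authors' implementation:

```python
import numpy as np

def orthogonality_loss(feats_a, feats_b, eps=1e-8):
    """Cross-correlation penalty between two members' features.

    feats_a, feats_b: (batch, dim) feature representations from two
    ensemble members. Each feature dimension is standardized over the
    batch, then the squared Frobenius norm of the cross-correlation
    matrix is returned. A value near 0 means the two representations
    are decorrelated (i.e. complementary rather than redundant).
    """
    a = (feats_a - feats_a.mean(axis=0)) / (feats_a.std(axis=0) + eps)
    b = (feats_b - feats_b.mean(axis=0)) / (feats_b.std(axis=0) + eps)
    cross_corr = a.T @ b / a.shape[0]  # (dim_a, dim_b) correlation matrix
    return float(np.sum(cross_corr ** 2))

rng = np.random.default_rng(0)
z1 = rng.standard_normal((256, 8))
z2 = rng.standard_normal((256, 8))

loss_indep = orthogonality_loss(z1, z2)  # independent features: small penalty
loss_self = orthogonality_loss(z1, z1)   # identical features: large penalty
```

In a training loop this term would be added, with some weight, to each member's classification loss, so that members are pushed toward mutually orthogonal feature spaces while still fitting the labels.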
Problem

Research questions and friction points this paper is trying to address.

Time Series Classification
Neural Network Ensembles
Feature Diversity
Ensemble Redundancy
Model Diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

diversity-driven ensemble
feature orthogonality loss
time series classification
neural network ensemble
decorrelated learning