ConSense: Continually Sensing Human Activity with WiFi via Growing and Picking

📅 2025-02-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting in WiFi-based human activity recognition (HAR) under class-incremental learning—where novel activities are dynamically introduced, edge devices suffer from severe resource constraints, and historical samples cannot be stored—this paper proposes a sample-free class-incremental learning framework. Methodologically, we design a lightweight Transformer-based architecture: parameter-efficient growth is achieved via dynamic expansion of multi-head self-attention layers; selective retraining, guided by neuron stability, and incremental knowledge consolidation jointly preserve prior task performance without retaining old samples. Extensive experiments on three public WiFi HAR benchmarks demonstrate that our method reduces model parameters by 37% and improves average accuracy by 5.2% over state-of-the-art incremental learning approaches, significantly enhancing feasibility and practicality for edge deployment.

Technology Category

Application Category

📝 Abstract
WiFi-based human activity recognition (HAR) holds significant application potential across various fields. To handle dynamic environments where new activities are continuously introduced, WiFi-based HAR systems must adapt by learning new concepts without forgetting previously learned ones. Furthermore, retaining knowledge from old activities by storing historical exemplar is impractical for WiFi-based HAR due to privacy concerns and limited storage capacity of edge devices. In this work, we propose ConSense, a lightweight and fast-adapted exemplar-free class incremental learning framework for WiFi-based HAR. The framework leverages the transformer architecture and involves dynamic model expansion and selective retraining to preserve previously learned knowledge while integrating new information. Specifically, during incremental sessions, small-scale trainable parameters that are trained specifically on the data of each task are added in the multi-head self-attention layer. In addition, a selective retraining strategy that dynamically adjusts the weights in multilayer perceptron based on the performance stability of neurons across tasks is used. Rather than training the entire model, the proposed strategies of dynamic model expansion and selective retraining reduce the overall computational load while balancing stability on previous tasks and plasticity on new tasks. Evaluation results on three public WiFi datasets demonstrate that ConSense not only outperforms several competitive approaches but also requires fewer parameters, highlighting its practical utility in class-incremental scenarios for HAR.
Problem

Research questions and friction points this paper is trying to address.

Adapt WiFi-based HAR to new activities
Retain knowledge without storing historical data
Reduce computational load with selective retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer architecture integration
Dynamic model expansion strategy
Selective retraining approach
Rong Li
Rong Li
PhD student, HKUST (GZ)
Computer VisionEmbodied AI
T
Tao Deng
School of Computer Science and Technology, Soochow University, China
S
Siwei Feng
School of Computer Science and Technology, Soochow University, China
Mingjie Sun
Mingjie Sun
Thinking Machines Lab
Juncheng Jia
Juncheng Jia
Soochow University
Edge IntelligenceFederated LearningInternet of ThingsMobile Computing