Respiratory Inhaler Sound Event Classification Using Self-Supervised Learning

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses poor cross-device generalization in acoustic classification models for monitoring inhaler adherence in asthma patients. It proposes a lightweight acoustic event classification method for smartwatch-collected audio from dry powder inhalers (DPIs): the wav2vec 2.0 self-supervised framework is pre-trained and fine-tuned on inhaler sounds, and a re-finetuning strategy adapts the model across devices using only a minimal amount of labeled data from the target device (a smartwatch). Evaluated on a real-world DPI and smartwatch dataset, the method achieves 98% balanced accuracy and shows improved generalization across inhaler types and audio capture hardware. To the authors' knowledge, this is the first study to demonstrate smartwatches as viable, portable platforms for machine-learning-based inhalation monitoring, offering a scalable, low-resource pathway for digital adherence monitoring in chronic respiratory disease.

📝 Abstract
Asthma is a chronic respiratory condition that affects millions of people worldwide. While this condition can be managed by administering controller medications through handheld inhalers, clinical studies have shown low adherence to the correct inhaler usage technique. Consequently, many patients may not receive the full benefit of their medication. Automated classification of inhaler sounds has recently been studied to assess medication adherence. However, the existing classification models were typically trained using data from specific inhaler types, and their ability to generalize to sounds from different inhalers remains unexplored. In this study, we adapted the wav2vec 2.0 self-supervised learning model for inhaler sound classification by pre-training and fine-tuning this model on inhaler sounds. The proposed model shows a balanced accuracy of 98% on a dataset collected using a dry powder inhaler and smartwatch device. The results also demonstrate that re-finetuning this model on minimal data from a target inhaler is a promising approach to adapting a generic inhaler sound classification model to a different inhaler device and audio capture hardware. This is the first study in the field to demonstrate the potential of smartwatches as assistive technologies for the personalized monitoring of inhaler adherence using machine learning models.
Problem

Research questions and friction points this paper is trying to address.

Classifying inhaler sounds to assess medication adherence
Generalizing models across different inhaler types and sounds
Using smartwatches for personalized inhaler monitoring via machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning for inhaler sound classification
Wav2vec 2.0 model adapted with pre-training
Smartwatch-based personalized adherence monitoring
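The cross-device adaptation idea above — keep the self-supervised encoder's learned representations and re-finetune on a small labeled set from the target device — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `TinyEncoder` is a hypothetical stand-in for the pretrained wav2vec 2.0 encoder, the four class labels are assumed (the paper does not enumerate them here), and the audio batch is random data in place of real smartwatch recordings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a pretrained wav2vec 2.0-style encoder.
class TinyEncoder(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=10, stride=5), nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, stride=2), nn.GELU(),
        )

    def forward(self, wav):              # wav: (batch, samples)
        h = self.conv(wav.unsqueeze(1))  # (batch, hidden, frames)
        return h.mean(dim=2)             # mean-pool over time -> (batch, hidden)

NUM_CLASSES = 4  # assumed sound events, e.g. inhalation / exhalation / actuation / noise

encoder = TinyEncoder()
for p in encoder.parameters():           # freeze the "pretrained" representations
    p.requires_grad = False
head = nn.Linear(32, NUM_CLASSES)        # only the classification head is re-finetuned

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Minimal labeled batch from the target device (random stand-in audio):
wavs = torch.randn(8, 16000)             # eight 1-second clips at 16 kHz
labels = torch.randint(0, NUM_CLASSES, (8,))

for _ in range(5):                       # a few adaptation steps
    opt.zero_grad()
    logits = head(encoder(wavs))
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()
```

Freezing the encoder keeps the number of trainable parameters small, which is what makes adaptation feasible with only minimal labeled data from the new device; in practice one might instead unfreeze the upper encoder layers as well.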
Davoud Shariat Panah
University College Dublin
Audio Processing, Machine Learning, Deep Learning
Alessandro N Franciosi
School of Medicine, University College Dublin, Ireland, and Department of Respiratory Medicine, St. Vincent’s University Hospital, Dublin, Ireland
Cormac McCarthy
School of Medicine, University College Dublin, Ireland, and Department of Respiratory Medicine, St. Vincent’s University Hospital, Dublin, Ireland
Andrew Hines
University College Dublin
Machine Perception, Speech Perception, Audio/Multimodal, Machine Learning, Quality of Experience