🤖 AI Summary
This study addresses the challenge of analyzing weak and transient bowel sound signals, which are difficult to quantify objectively due to the high subjectivity of traditional manual auscultation. The authors propose an automated bowel sound analysis pipeline based on a wearable SonicGuard sensor, integrating energy-threshold-based event detection with an Audio Spectrogram Transformer (AST) model, the first application of AST to bowel sound classification. To enhance generalization, a dual-model strategy is employed, with separate models trained on data from healthy individuals and from patients. Evaluated on an expert-annotated dataset, the approach achieves classification accuracies of 97% and 96% for the healthy and patient groups, respectively, with AUROC scores of 0.98 for both. Furthermore, the automated annotation pipeline reduces manual labeling time by 70%, requiring expert correction for fewer than 12% of audio segments.
📝 Abstract
Bowel sounds (BS) are typically momentary and have low amplitude, making them difficult to detect accurately through manual auscultation. This leads to significant variability in clinical assessment. Digital acoustic sensors allow the acquisition of high-quality BS signals and enable automated analysis, offering the potential to provide clinicians with objective, quantitative feedback on bowel activity. This study presents an automated pipeline for bowel sound segmentation and classification using a wearable acoustic SonicGuard sensor. BS signals from 83 subjects were recorded with the SonicGuard sensor. Data from 40 subjects were manually annotated by clinical experts and used to train an automatic annotation algorithm, while the remaining subjects were used for further model evaluation. An energy-based event detection algorithm was developed to detect BS events. Detected sound segments were then classified into BS patterns using a pretrained Audio Spectrogram Transformer (AST) model. Model performance was evaluated separately for healthy individuals and patients. The best configuration used two specialized models, one trained on healthy subjects and one on patients, achieving an accuracy of 0.97 and AUROC of 0.98 for the healthy group, and an accuracy of 0.96 and AUROC of 0.98 for the patient group. The auto-annotation method reduced manual labeling time by approximately 70%, and expert review showed that fewer than 12% of automatically detected segments required correction. The proposed automated segmentation and classification system enables quantitative assessment of bowel activity, providing clinicians with an objective diagnostic tool that may improve the diagnosis of gastrointestinal function and support the annotation of large-scale datasets.
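The abstract does not detail the energy-based event detection step. A minimal sketch of the general idea (short-time energy compared against a robust adaptive threshold, then merging active frames into events) might look like the following; all parameter values, the MAD-based threshold, and the synthetic demo signal are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def detect_events(signal, sr, frame_ms=25, hop_ms=10, k=3.0, min_dur_ms=50):
    """Return (start, end) times in seconds where short-time energy
    exceeds an adaptive threshold (median + k * MAD of frame energies).
    Parameter defaults are illustrative, not from the original study."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    # Short-time energy per frame
    energy = np.array([
        np.sum(signal[i * hop:i * hop + frame] ** 2) for i in range(n_frames)
    ])
    # Robust adaptive threshold (median absolute deviation)
    med = np.median(energy)
    mad = np.median(np.abs(energy - med))
    active = energy > med + k * mad
    # Merge consecutive active frames into segments
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * hop / sr, ((i - 1) * hop + frame) / sr))
            start = None
    if start is not None:
        events.append((start * hop / sr, len(signal) / sr))
    # Discard segments shorter than the minimum duration
    min_dur = min_dur_ms / 1000
    return [(s, e) for s, e in events if e - s >= min_dur]

# Synthetic demo: low-level noise with one short burst around 0.5 s
sr = 4000
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(sr)
x[2000:2200] += 0.5 * np.sin(2 * np.pi * 200 * np.arange(200) / sr)
events = detect_events(x, sr)
print(events)
```

Actual BS detectors typically operate on band-pass-filtered audio and tune the threshold and duration constraints to the sensor and recording conditions; this sketch only conveys the thresholding-and-merging structure.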