Towards Objective Gastrointestinal Auscultation: Automated Segmentation and Annotation of Bowel Sound Patterns

📅 2026-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenges of analyzing weak and transient bowel sound signals, which are difficult to objectively quantify due to the high subjectivity of traditional manual auscultation. The authors propose an automated bowel sound analysis pipeline based on a wearable SonicGuard sensor, integrating energy-threshold-based event detection with an Audio Spectrogram Transformer (AST) model—the first application of AST to bowel sound classification. To enhance generalization, a dual-model strategy is employed, separately trained on data from healthy individuals and patients. Evaluated on an expert-annotated dataset, the approach achieves classification accuracies of 97% and 96% for healthy and patient groups, respectively, with AUROC scores of 0.98 for both. Furthermore, the automated annotation pipeline reduces manual labeling time by 70%, requiring expert correction for fewer than 12% of audio segments.

📝 Abstract
Bowel sounds (BS) are typically momentary and have low amplitude, making them difficult to detect accurately through manual auscultation. This leads to significant variability in clinical assessment. Digital acoustic sensors allow the acquisition of high-quality BS and enable automated signal analysis, offering the potential to provide clinicians with both objective and quantitative feedback on bowel activity. This study presents an automated pipeline for bowel sound segmentation and classification using a wearable acoustic SonicGuard sensor. BS signals from 83 subjects were recorded with the SonicGuard sensor. Data from 40 subjects were manually annotated by clinical experts and used to train an automatic annotation algorithm, while the remaining subjects were used for further model evaluation. An energy-based event detection algorithm was developed to detect BS events. Detected sound segments were then classified into BS patterns using a pretrained Audio Spectrogram Transformer (AST) model. Model performance was evaluated separately for healthy individuals and patients. The best configuration used two specialized models, one trained on healthy subjects and one on patients, achieving an accuracy of 0.97 (AUROC: 0.98) for the healthy group and an accuracy of 0.96 (AUROC: 0.98) for the patient group. The auto-annotation method reduced manual labeling time by approximately 70%, and expert review showed that less than 12% of automatically detected segments required correction. The proposed automated segmentation and classification system enables quantitative assessment of bowel activity, providing clinicians with an objective diagnostic tool that may improve the diagnosis of gastrointestinal function and support the annotation of large-scale datasets.
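The abstract mentions an energy-based event detection algorithm for segmenting bowel sound events before AST classification. The paper does not publish its implementation; the following is a minimal sketch of the general technique, thresholding short-time energy against a median-based adaptive baseline and merging consecutive active frames into events. The function name, frame/hop sizes, and threshold ratio are all illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

def detect_events(signal, sr, frame_ms=25, hop_ms=10, threshold_ratio=3.0):
    """Return (start_sample, end_sample) spans whose short-time energy
    exceeds threshold_ratio times the median frame energy.
    All parameter defaults are illustrative, not taken from the paper."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame) // hop)
    # Short-time energy per frame.
    energy = np.array([
        np.sum(signal[i * hop:i * hop + frame] ** 2) for i in range(n_frames)
    ])
    threshold = threshold_ratio * np.median(energy)
    active = energy > threshold
    # Merge runs of consecutive active frames into single events.
    events, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            events.append((start * hop, i * hop + frame))
            start = None
    if start is not None:
        events.append((start * hop, (n_frames - 1) * hop + frame))
    return events
```

In a pipeline like the one described, each returned span would then be cut from the recording and passed to the classifier; the detected boundaries are frame-quantized, so they can over-reach the true event by up to one frame on either side.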
Problem

Research questions and friction points this paper is trying to address.

bowel sounds
gastrointestinal auscultation
automated segmentation
objective assessment
clinical variability
Innovation

Methods, ideas, or system contributions that make the work stand out.

automated bowel sound segmentation
Audio Spectrogram Transformer
wearable acoustic sensor
objective gastrointestinal auscultation
energy-based event detection
Zahra Mansour
Division AI4Health, Department for Health Services Research, Faculty of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany; Fraunhofer IDMT, Institute Part HSA, 26129 Oldenburg, Germany
Verena Uslar
University Clinic for Visceral Surgery, Faculty of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26121 Oldenburg, Germany
Dirk Weyhe
University Clinic for Visceral Surgery, Faculty of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, 26121 Oldenburg, Germany
Danilo Hollosi
Fraunhofer IDMT, Institute Part HSA, 26129 Oldenburg, Germany
Nils Strodthoff
Professor for eHealth/AI4Health, Oldenburg University, Germany
Machine Learning, Deep Learning, Biomedical Data Analysis