Foundation Model Hidden Representations for Heart Rate Estimation from Auscultation

📅 2025-05-27
🤖 AI Summary
This study investigates the efficacy of latent representations from pretrained audio foundation models for heart rate estimation from phonocardiogram (PCG) signals. Six models are examined: HuBERT, wav2vec 2.0, wavLM, Whisper, CLAP, and an in-house CLAP model. The authors conduct a hierarchical, multi-model interpretability analysis tailored to auscultation tasks, systematically evaluating discriminative capacity across model layers via layer-wise feature extraction and regression modeling on PCG data. Results show that pretrained FM representations are overall comparable to the acoustic-feature baseline of Nie et al. (2024), and that the audio encoder of the in-house CLAP model outperforms that baseline: its best-performing layer achieves the lowest mean absolute error (MAE) for heart rate estimation and remains robust across diverse data splits, despite domain mismatch. This work offers a paradigm for adapting and interpreting audio foundation models in clinical auscultation, providing empirical evidence for their task-specific efficacy.

📝 Abstract
Auscultation, particularly of heart sounds, is a non-invasive technique that provides essential vital sign information. Recently, self-supervised acoustic representation foundation models (FMs) have been proposed to offer insights into acoustics-based vital signs. However, there has been little exploration of the extent to which auscultation is encoded in these pre-trained FM representations. In this work, using a publicly available phonocardiogram (PCG) dataset and a heart rate (HR) estimation model, we conduct a layer-wise investigation of six acoustic representation FMs: HuBERT, wav2vec2, wavLM, Whisper, Contrastive Language-Audio Pretraining (CLAP), and an in-house CLAP model. Additionally, we implement the baseline method from Nie et al., 2024 (which relies on acoustic features) and show that, overall, representation vectors from pre-trained FMs offer comparable performance to the baseline. Notably, HR estimation using the representations from the audio encoder of the in-house CLAP model outperforms the results obtained from the baseline, achieving a lower mean absolute error (MAE) across various train/validation/test splits despite the domain mismatch.
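The layer-wise probing procedure described above can be sketched as a simple loop: pool each model layer's hidden states into one vector per PCG clip, fit a lightweight regression probe on each layer, and compare per-layer MAE to find the most HR-informative depth. The sketch below uses synthetic per-layer embeddings and a ridge probe; the layer count, feature dimension, and the choice of ridge regression are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_clips, n_layers, dim = 200, 12, 64  # hypothetical sizes, not from the paper
# Stand-in for pooled FM hidden states: one vector per clip per layer.
layer_feats = rng.normal(size=(n_layers, n_clips, dim))
hr = rng.uniform(50, 120, size=n_clips)  # ground-truth heart rate in BPM
# Inject a weak HR signal into one layer so the probe has something to find.
layer_feats[7, :, 0] += (hr - hr.mean()) / hr.std()

maes = []
for layer in range(n_layers):
    X_tr, X_te, y_tr, y_te = train_test_split(
        layer_feats[layer], hr, test_size=0.3, random_state=0)
    probe = Ridge(alpha=1.0).fit(X_tr, y_tr)  # lightweight per-layer probe
    maes.append(mean_absolute_error(y_te, probe.predict(X_te)))

best_layer = int(np.argmin(maes))
print(f"best layer: {best_layer}, MAE: {maes[best_layer]:.2f} BPM")
```

With real models, the per-layer features would come from the encoder's hidden states (e.g. requesting all intermediate layers and mean-pooling over time), but the probe-and-compare loop is the same.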
Problem

Research questions and friction points this paper is trying to address.

Exploring how well pre-trained FM representations encode heart rate
Comparing six acoustic FMs against an acoustic-feature baseline
Evaluating whether the in-house CLAP model improves HR estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise analysis of six acoustic FMs
In-house CLAP model outperforms baseline
Uses pre-trained FM representations for HR estimation