🤖 AI Summary
This work addresses two limitations: the subjectivity of auscultation interpretation and the inability of general-purpose audio-language models to capture subtle physiological signal characteristics. The authors propose a lightweight, domain-specific encoder that combines a multi-site auscultation signal aggregation strategy with a gated cross-attention mechanism, aligning multi-channel auscultatory features with the embedding space of a frozen large language model and leveraging its broad world knowledge for holistic, patient-level assessment. The method mitigates temporal truncation without extensive retraining and achieves state-of-the-art performance on the CaReSound benchmark, with an F1-macro of 0.865 and a BERTScore of 0.952.
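To make the alignment step concrete, here is a minimal PyTorch sketch of what a gated cross-attention adapter between audio features and a frozen LLM could look like. The class and parameter names (`GatedCrossAttentionAdapter`, `audio_proj`, the Flamingo-style tanh gate) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GatedCrossAttentionAdapter(nn.Module):
    """Illustrative adapter (not the paper's code): LLM token embeddings
    attend to projected audio features via cross-attention, with a
    tanh-gated residual connection."""

    def __init__(self, llm_dim: int, audio_dim: int, n_heads: int = 8):
        super().__init__()
        # Project audio features into the frozen LLM's embedding space.
        self.audio_proj = nn.Linear(audio_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, n_heads, batch_first=True)
        # Gate initialized to zero: the adapter starts as an identity map.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, llm_tokens: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        # llm_tokens: (B, T_text, llm_dim); audio_feats: (B, T_audio, audio_dim)
        kv = self.audio_proj(audio_feats)
        attended, _ = self.attn(query=llm_tokens, key=kv, value=kv)
        # Gated residual: the frozen LLM's representations pass through
        # unchanged until training opens the gate.
        return llm_tokens + torch.tanh(self.gate) * attended

# Usage with made-up dimensions:
adapter = GatedCrossAttentionAdapter(llm_dim=4096, audio_dim=768)
tokens = torch.randn(2, 32, 4096)   # frozen-LLM token embeddings
audio = torch.randn(2, 500, 768)    # multi-site auscultation features
out = adapter(tokens, audio)        # same shape as tokens: (2, 32, 4096)
```

Initializing the gate at zero keeps the frozen LLM's outputs unchanged at the start of training, so only the lightweight adapter needs to learn; this is one common way to avoid retraining the backbone.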
📝 Abstract
Auscultation is a vital diagnostic tool, yet its utility is often limited by subjective interpretation. While general-purpose Audio-Language Models (ALMs) excel in general domains, they struggle with the nuances of physiological signals. We propose a framework that aligns multi-site auscultation recordings directly with a frozen Large Language Model (LLM) embedding space via gated cross-attention. By leveraging the LLM's latent world knowledge, our approach moves beyond isolated classification toward holistic, patient-level assessment. On the CaReSound benchmark, our model achieves a state-of-the-art 0.865 F1-macro and 0.952 BERTScore. We demonstrate that lightweight, domain-specific encoders rival large-scale ALMs and that multi-site aggregation provides spatial redundancy that mitigates temporal truncation. Aligning medical acoustics with text foundation models offers a scalable path toward bridging signal processing and clinical assessment.
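One plausible reading of the multi-site aggregation claim is sketched below: each auscultation site's feature sequence is truncated to fit the model's context budget, and the truncated sequences are concatenated, so frames lost at any one site are partially compensated by the other sites. The function name and shapes are hypothetical, intended only to illustrate the spatial-redundancy idea.

```python
import torch

def aggregate_sites(per_site_feats: list[torch.Tensor],
                    max_frames_per_site: int) -> torch.Tensor:
    """Hypothetical aggregation: concatenate truncated per-site features.

    per_site_feats: list of (T_i, D) tensors, one per auscultation site.
    Each site is truncated to max_frames_per_site so the combined sequence
    fits the encoder's context; redundancy across sites compensates for
    frames dropped at any single site.
    """
    truncated = [f[:max_frames_per_site] for f in per_site_feats]
    return torch.cat(truncated, dim=0)  # (sum_i min(T_i, max), D)

# Usage: four chest sites with variable-length 256-dim feature sequences.
sites = [torch.randn(t, 256) for t in (900, 1200, 750, 1100)]
combined = aggregate_sites(sites, max_frames_per_site=512)
print(combined.shape)  # torch.Size([2048, 256])
```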