StethoLM: Audio Language Model for Cardiopulmonary Analysis Across Clinical Tasks

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes StethoLM, the first instruction-driven audio-language model tailored for cardiopulmonary auscultation. It addresses two limitations: traditional auscultation relies heavily on clinician experience, and existing deep learning approaches are often confined to simple classification tasks with poor clinical interpretability and limited multitask capability. By integrating an audio encoder with a medical language model, StethoLM supports seven clinical tasks ranging from classification and reasoning to differential diagnosis. The model is trained on StethoBench, a newly curated benchmark of 77,027 instruction-response pairs, via a multi-stage strategy combining supervised fine-tuning with direct preference optimization. This approach substantially improves performance and robustness on out-of-distribution data, yielding an end-to-end, interpretable, multitask AI decision support system for clinical auscultation.

📝 Abstract
Listening to heart and lung sounds, known as auscultation, is one of the first and most fundamental steps in a clinical examination. Despite being fast and non-invasive, it demands years of experience to interpret subtle audio cues. Recent deep learning methods have made progress in automating cardiopulmonary sound analysis, yet most are restricted to simple classification and offer little clinical interpretability or decision support. We present StethoLM, the first audio-language model specialized for cardiopulmonary auscultation, capable of performing instruction-driven clinical tasks across the full spectrum of auscultation analysis. StethoLM integrates audio encoding with a medical language model backbone and is trained on StethoBench, a comprehensive benchmark comprising 77,027 instruction-response pairs synthesized from 16,125 labeled cardiopulmonary recordings spanning seven clinical task categories: binary classification, detection, reporting, reasoning, differential diagnosis, comparison, and location-based analysis. Through multi-stage training that combines supervised fine-tuning and direct preference optimization, StethoLM achieves substantial gains in performance and robustness on out-of-distribution data. Our work establishes a foundation for instruction-following AI systems in clinical auscultation.
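The abstract's second training stage uses direct preference optimization (DPO). As a minimal sketch of the standard DPO objective (the paper does not publish its exact loss or hyperparameters, so the `beta` value and the log-probability numbers below are illustrative assumptions, not results from StethoLM):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed token log-probability of a full
    response under the trained policy (logp_*) or under the frozen
    reference model from the supervised fine-tuning stage (ref_logp_*).
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin; minimized when the policy
    # ranks the chosen response above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative values: a zero margin gives -log(0.5); widening the
# margin in favor of the chosen response lowers the loss.
zero_margin_loss = dpo_loss(-10.0, -12.0, -10.0, -12.0)
wide_margin_loss = dpo_loss(-8.0, -14.0, -10.0, -12.0)
```

In an audio-language setting, the chosen and rejected responses would be alternative text answers to the same instruction over the same recording, scored by the model conditioned on the audio tokens.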
Problem

Research questions and friction points this paper is trying to address.

cardiopulmonary auscultation
clinical interpretability
audio-language model
clinical decision support
heart and lung sounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

audio-language model
cardiopulmonary auscultation
instruction-following AI
clinical interpretability
multi-task learning