🤖 AI Summary
This work addresses the scarcity of training data and evaluation benchmarks for long-context audio reasoning, which hinders open-ended long-form audio generation and summarization. The authors propose the first end-to-end, open-source framework that synthesizes medical-consultation triads comprising patient–clinician dialogues, multi-speaker audio, and structured clinical notes. The pipeline employs a role-playing large language model to generate first-visit dialogues, which are then rendered into realistic multi-speaker speech incorporating overlapping utterances, pauses, room acoustics, and ambient noise, followed by automatic generation of SOAP-format clinical summaries. The project releases 8,800 synthetic dialogues (totaling 1,300 hours of audio) with corresponding reference summaries, filling a critical gap in medical long-audio datasets. Evaluations demonstrate that a cascaded approach significantly outperforms end-to-end models.
📝 Abstract
Long-context audio reasoning is underserved in both training data and evaluation. Existing benchmarks target short-context tasks, and the open-ended generation tasks most relevant to long-context reasoning pose well-known challenges for automatic evaluation. We propose a synthetic data generation pipeline designed to serve both as a training resource and as a controlled evaluation environment, and instantiate it for first-visit doctor-patient conversations with SOAP note generation as the task. The pipeline has three stages: persona-driven dialogue generation; multi-speaker audio synthesis with overlap/pause modeling, room acoustics, and sound events; and LLM-based reference SOAP note production. It is built entirely on open-weight models. We release 8,800 synthetic conversations with 1.3k hours of corresponding audio and reference notes. Evaluating current open-weight systems, we find that cascaded approaches still substantially outperform end-to-end models.
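The three-stage pipeline described in the abstract can be sketched as a chain of generation steps. This is a minimal illustrative skeleton, not the authors' implementation: every function name, parameter, and stubbed return value below is a hypothetical placeholder standing in for the actual LLM, TTS, and acoustic-simulation components.

```python
from dataclasses import dataclass

@dataclass
class Consultation:
    """One synthetic sample: dialogue turns, rendered audio, and a reference note."""
    dialogue: list[tuple[str, str]]  # (speaker, utterance) pairs
    audio: bytes                     # placeholder for the synthesized waveform
    soap_note: dict[str, str]        # Subjective / Objective / Assessment / Plan

def generate_dialogue(patient_persona: str, clinician_persona: str) -> list[tuple[str, str]]:
    # Stage 1 (stub): a role-playing LLM would produce a first-visit
    # conversation conditioned on the two personas.
    return [
        ("doctor", "What brings you in today?"),
        ("patient", "I've had a persistent cough for two weeks."),
    ]

def synthesize_audio(dialogue: list[tuple[str, str]],
                     overlap_prob: float = 0.1,
                     room: str = "small_office") -> bytes:
    # Stage 2 (stub): multi-speaker TTS with overlap/pause modeling,
    # room acoustics, and ambient sound events mixed in.
    return b"\x00" * 16  # placeholder waveform bytes

def write_soap_note(dialogue: list[tuple[str, str]]) -> dict[str, str]:
    # Stage 3 (stub): an LLM would summarize the dialogue into SOAP format.
    return {"Subjective": "...", "Objective": "...", "Assessment": "...", "Plan": "..."}

def build_consultation(patient_persona: str, clinician_persona: str) -> Consultation:
    """Run all three stages to produce one dialogue/audio/note triad."""
    dialogue = generate_dialogue(patient_persona, clinician_persona)
    return Consultation(
        dialogue=dialogue,
        audio=synthesize_audio(dialogue),
        soap_note=write_soap_note(dialogue),
    )
```

Chaining the stages this way is what makes the dataset a controlled environment: because the reference SOAP note is derived from the same dialogue as the audio, a model's summary can be scored against a known ground truth.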