Fine-Tuning Large Audio-Language Models with LoRA for Precise Temporal Localization of Prolonged Exposure Therapy Elements

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high time cost and low accuracy of manual temporal annotation of critical phases—orientation, imaginal exposure, and post-exposure processing—in Prolonged Exposure (PE) therapy sessions. We propose a millisecond-level multimodal temporal localization method leveraging large language and audio models. Specifically, we pioneer the adaptation of Qwen2-Audio via LoRA for clinical dialogue timing annotation, integrating 30-second sliding-window audio-text alignment, task-prompt-guided soft-supervised boundary regression, and an LLM-assisted + human-verification collaborative annotation paradigm. Evaluated on 313 real-world PE sessions, our method achieves a mean absolute error of 5.3 seconds. Ablation studies confirm that window granularity and LoRA rank significantly impact temporal precision. This work establishes a scalable, high-accuracy technical framework for automated analysis of psychotherapy process dynamics.
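The summary mentions 30-second sliding-window audio-text alignment. As a rough illustration of how a session timeline might be segmented into such windows, here is a minimal sketch; the stride value is an assumption (the paper only states the 30-second window size):

```python
# Hypothetical sketch of 30-second sliding-window segmentation.
# The 15 s stride is an assumed overlap, not taken from the paper.

def make_windows(duration_s: float, window_s: float = 30.0, stride_s: float = 15.0):
    """Return (start, end) spans in seconds covering the session audio."""
    windows = []
    start = 0.0
    while start < duration_s:
        windows.append((start, min(start + window_s, duration_s)))
        start += stride_s
    return windows
```

Each span would then be paired with its transcript segment before being fed to the audio-language model.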

📝 Abstract
Prolonged Exposure (PE) therapy is an effective treatment for post-traumatic stress disorder (PTSD), but evaluating therapist fidelity remains labor-intensive due to the need for manual review of session recordings. We present a method for the automatic temporal localization of key PE fidelity elements -- identifying their start and stop times -- directly from session audio and transcripts. Our approach fine-tunes a large pre-trained audio-language model, Qwen2-Audio, using Low-Rank Adaptation (LoRA) to process focused 30-second windows of audio-transcript input. Fidelity labels for three core protocol phases -- therapist orientation (P1), imaginal exposure (P2), and post-imaginal processing (P3) -- are generated via LLM-based prompting and verified by trained raters. The model is trained to predict normalized boundary offsets using soft supervision guided by task-specific prompts. On a dataset of 313 real PE sessions, our best configuration (LoRA rank 8, 30s windows) achieves a mean absolute error (MAE) of 5.3 seconds across tasks. We further analyze the effects of window size and LoRA rank, highlighting the importance of context granularity and model adaptation. This work introduces a scalable framework for fidelity tracking in PE therapy, with potential to support clinician training, supervision, and quality assurance.
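The abstract describes predicting normalized boundary offsets within each window and reporting mean absolute error in seconds. A minimal sketch of that bookkeeping (function names are illustrative, not from the paper's code):

```python
# Hypothetical helpers for normalized boundary regression within a window.

WINDOW_S = 30.0  # window length stated in the abstract

def normalize_boundary(t_abs: float, win_start: float, win_len: float = WINDOW_S) -> float:
    """Map an absolute boundary time (s) to a [0, 1] offset inside the window."""
    return (t_abs - win_start) / win_len

def denormalize_boundary(offset: float, win_start: float, win_len: float = WINDOW_S) -> float:
    """Map a predicted [0, 1] offset back to absolute session time (s)."""
    return win_start + offset * win_len

def mae_seconds(preds, targets) -> float:
    """Mean absolute error between predicted and reference boundary times."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)
```

Under this scheme the reported 5.3 s MAE corresponds to an average normalized error of roughly 0.18 within a 30-second window.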
Problem

Research questions and friction points this paper is trying to address.

Automatically localize key PE therapy elements in audio
Fine-tune audio-language model for precise temporal localization
Reduce manual review of PTSD therapy session recordings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes Qwen2-Audio with LoRA
Processes 30-second audio-transcript windows
Uses LLM-based prompting for fidelity labels
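The paper adapts Qwen2-Audio with LoRA at rank 8. The mechanism itself (a frozen weight plus a scaled low-rank trainable update) can be sketched generically; this is an illustration of LoRA, not the authors' implementation:

```python
import numpy as np

class LoRALinear:
    """Generic LoRA sketch: frozen W plus trainable (alpha / r) * B @ A."""

    def __init__(self, w: np.ndarray, r: int = 8, alpha: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w = w                                          # frozen (out, in) weight
        out_dim, in_dim = w.shape
        self.a = rng.normal(scale=0.01, size=(r, in_dim))   # trainable down-projection
        self.b = np.zeros((out_dim, r))                     # zero-init: adapter starts as a no-op
        self.scale = alpha / r

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return x @ (self.w + self.scale * (self.b @ self.a)).T
```

Because `b` is zero-initialized, the adapted layer reproduces the frozen model exactly at the start of fine-tuning; only the small `a` and `b` matrices are updated, which is what makes rank a tunable knob in the ablations.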
BN Suhas
College of Information Sciences & Technology, The Pennsylvania State University, University Park, USA
Andrew M. Sherrill
Department of Psychiatry & Behavioral Sciences, Emory University, Atlanta, USA
Jyoti Alaparthi
Department of Psychiatry & Behavioral Sciences, Emory University, Atlanta, USA
Dominik Mattioli
College of Information Sciences & Technology, The Pennsylvania State University, University Park, USA
Rosa I. Arriaga
Associate Professor, Interactive Computing, Georgia Tech
HCI · mHealth · social computing · cognitive science
Chris W. Wiese
School of Psychology, Georgia Institute of Technology, Atlanta, USA
Saeed Abdullah
Penn State
HCI · Digital Health · mHealth · HCAI