🤖 AI Summary
Clinical psychology training lacks objective, actionable assessment criteria: although no absolute "gold standard" for therapeutic behavior exists, trainees still require reliable, controllable feedback. Method: This paper proposes an LLM-driven, error-oriented supervision paradigm comprising (1) a clinical-guideline-finetuned LLM that detects domain-specific therapeutic errors in dialogue; (2) a human-AI collaborative framework for curating high-quality dialogue-feedback pairs; and (3) a quantifiable mapping schema linking error triggers to corresponding corrective feedback. Contribution/Results: Empirical evaluation demonstrates significant improvements over baselines across three dimensions: automated error-assessment accuracy, expert blind-evaluation scores, and downstream pedagogical efficacy. The system achieves, for the first time, standardized, scalable, and interpretable supervision grounded in clinically meaningful error patterns, establishing a novel AI-augmented pathway for evidence-based psychotherapy training.
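To make the trigger-to-feedback mapping concrete, here is a minimal sketch of how such a schema might be represented. The error categories, feedback templates, and the `ErrorTrigger` / `lookup_feedback` names are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

# Hypothetical error taxonomy: each guideline-defined mistake type is
# paired with a corrective-feedback template (illustrative values only).
FEEDBACK_SCHEMA = {
    "premature_advice": (
        "The therapist offered advice before exploring the client's "
        "perspective; reflect the client's feelings first."
    ),
    "closed_question_chain": (
        "A run of closed questions narrowed the dialogue; try an "
        "open-ended question to invite elaboration."
    ),
    "judgmental_language": (
        "The response contained evaluative wording; rephrase neutrally "
        "to preserve the therapeutic alliance."
    ),
}

@dataclass
class ErrorTrigger:
    """A detected mistake: its type and where it occurred in the dialogue."""
    error_type: str
    turn_index: int

def lookup_feedback(trigger: ErrorTrigger) -> str:
    """Map a detected error trigger to its corrective feedback."""
    template = FEEDBACK_SCHEMA[trigger.error_type]
    return f"Turn {trigger.turn_index}: {template}"

print(lookup_feedback(ErrorTrigger("premature_advice", turn_index=4)))
```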
📝 Abstract
Although large language models (LLMs) hold significant promise for psychotherapy, their direct application in patient-facing scenarios raises ethical and safety concerns. This work therefore shifts towards developing an LLM as a supervisor that trains real therapists. Beyond the privacy constraints on clinical therapist-training data, a fundamental contradiction complicates the training of therapeutic behaviors: clear feedback standards are necessary for a controlled training system, yet no absolute "gold standard" for appropriate therapeutic behavior exists in practice. By contrast, many common therapeutic mistakes are universal and identifiable, making them effective triggers for targeted, evidence-grounded feedback. Motivated by this, we create a novel therapist-training paradigm: (1) guidelines for mistaken behaviors and targeted correction strategies are first established as standards; (2) a human-in-the-loop dialogue-feedback dataset is then constructed, in which a mistake-prone agent deliberately commits guideline-defined mistakes during naturalistic interviews and a supervisor agent locates these mistakes and provides targeted feedback; (3) after fine-tuning on this dataset, the final supervisor model is provided for real therapist training. Detailed experimental results from automated, human, and downstream assessments demonstrate that models fine-tuned on our dataset, MATE, provide high-quality feedback consistent with the clinical guidelines, showing significant potential for the therapist-training scenario.
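A minimal sketch of the data-generation step described above, assuming a generic `chat(system_prompt, history)` LLM interface; the prompts, function names, and control flow are illustrative assumptions, not the paper's implementation. In the human-in-the-loop setting, an expert would verify or revise the supervisor's feedback before each pair enters the dataset.

```python
from typing import Callable, List, Tuple

# Assumed generic LLM interface: takes a system prompt and dialogue
# history, returns the next utterance. Any chat API could back this.
ChatFn = Callable[[str, List[str]], str]

MISTAKE_PRONE_PROMPT = (
    "You are a trainee therapist. Conduct the interview naturally, but "
    "deliberately commit the guideline-defined mistake: {mistake}."
)
SUPERVISOR_PROMPT = (
    "You are a clinical supervisor. Locate and identify mistakes in the "
    "dialogue below and give targeted corrective feedback per the guideline."
)

def generate_pair(chat: ChatFn, client_turns: List[str],
                  mistake: str) -> Tuple[List[str], str]:
    """Roll out one mistake-seeded dialogue, then annotate it with feedback."""
    history: List[str] = []
    for client_utterance in client_turns:
        history.append(f"Client: {client_utterance}")
        therapist = chat(MISTAKE_PRONE_PROMPT.format(mistake=mistake), history)
        history.append(f"Therapist: {therapist}")
    # Supervisor agent reviews the full transcript and writes feedback;
    # a human expert then vets this pair before it is used for fine-tuning.
    feedback = chat(SUPERVISOR_PROMPT, history)
    return history, feedback
```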