LLM-as-a-Supervisor: Mistaken Therapeutic Behaviors Trigger Targeted Supervisory Feedback

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical psychology training lacks objective, actionable assessment criteria—despite the absence of an absolute “gold standard,” trainees require reliable, controllable feedback. Method: This paper proposes an LLM-driven, error-oriented supervision paradigm comprising: (1) a clinical-guideline-finetuned LLM to detect domain-specific therapeutic errors in dialogue; (2) a human-AI collaborative framework for curating high-quality dialogue–feedback pairs; and (3) a quantifiable mapping schema linking error triggers to corresponding corrective feedback. Contribution/Results: Empirical evaluation demonstrates significant improvements over baselines across three dimensions: automated error assessment accuracy, expert blind evaluation scores, and downstream pedagogical efficacy. The system achieves, for the first time, standardized, scalable, and interpretable supervision grounded in clinically meaningful error patterns—establishing a novel AI-augmented pathway for evidence-based psychotherapy training.
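The quantifiable mapping from error triggers to corrective feedback can be pictured as a simple lookup structure. The sketch below is purely illustrative: the guideline codes, mistake labels, and feedback strings are invented, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical illustration of a trigger-to-feedback mapping schema;
# codes, names, and feedback text are invented for this sketch.
@dataclass(frozen=True)
class ErrorTrigger:
    code: str      # guideline identifier, e.g. "M01"
    name: str      # short label for the mistaken therapeutic behavior
    feedback: str  # targeted corrective feedback for that mistake

GUIDELINE = {
    t.code: t
    for t in [
        ErrorTrigger("M01", "premature advice",
                     "Explore the client's perspective before suggesting solutions."),
        ErrorTrigger("M02", "closed questioning",
                     "Prefer open-ended questions that invite elaboration."),
    ]
}

def feedback_for(code: str) -> str:
    """Look up the corrective feedback mapped to a detected error code."""
    trigger = GUIDELINE.get(code)
    return trigger.feedback if trigger else "No guideline entry for this code."
```

Keying feedback to discrete error codes is what makes the supervision interpretable: each correction can be traced back to a specific guideline entry.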

📝 Abstract
Although large language models (LLMs) hold significant promise in psychotherapy, their direct application in patient-facing scenarios raises ethical and safety concerns. Therefore, this work shifts towards developing an LLM as a supervisor to train real therapists. Beyond the privacy constraints on clinical therapist-training data, a fundamental contradiction complicates the training of therapeutic behaviors: clear feedback standards are necessary to ensure a controlled training system, yet there is no absolute "gold standard" for appropriate therapeutic behaviors in practice. By contrast, many common therapeutic mistakes are universal and identifiable, making them effective triggers for targeted feedback that can serve as clearer evidence. Motivated by this, we create a novel therapist-training paradigm: (1) guidelines for mistaken behaviors and targeted correction strategies are first established as standards; (2) a human-in-the-loop dialogue-feedback dataset is then constructed, in which a mistake-prone agent intentionally weaves standard mistakes into naturalistic interviews while a supervisor agent locates and identifies the mistakes and provides targeted feedback; (3) after fine-tuning on this dataset, the final supervisor model is provided for real therapist training. Detailed experimental results from automated, human, and downstream assessments demonstrate that models fine-tuned on our dataset, MATE, can provide high-quality feedback according to the clinical guideline, showing significant potential for the therapist-training scenario.
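The data-construction step of the paradigm can be mocked with two cooperating agents: one that injects a known guideline mistake into a therapist turn, and one that locates it and attaches corrective feedback. In this minimal sketch, both agents are stand-in functions rather than LLMs, and all codes, utterances, and feedback strings are invented assumptions.

```python
from typing import TypedDict

class DialogueFeedbackPair(TypedDict):
    turn: str      # therapist utterance containing the injected mistake
    mistake: str   # guideline code of the intentional error
    feedback: str  # supervisor's targeted correction

def mistake_prone_agent(topic: str) -> tuple[str, str]:
    # Stand-in for the mistake-prone LLM agent: always injects
    # a hypothetical "premature advice" error (code "M01").
    return f"You should just stop worrying about {topic}.", "M01"

def supervisor_agent(turn: str, mistake: str) -> str:
    # Stand-in for the supervisor LLM agent: maps the known error
    # code to canned corrective feedback.
    rules = {"M01": "Avoid premature advice; explore the client's feelings first."}
    return rules.get(mistake, "No feedback rule available.")

def build_pair(topic: str) -> DialogueFeedbackPair:
    """Construct one dialogue-feedback training pair."""
    turn, code = mistake_prone_agent(topic)
    return {"turn": turn, "mistake": code,
            "feedback": supervisor_agent(turn, code)}
```

In the actual pipeline, humans stay in the loop to vet such pairs before they enter the fine-tuning dataset; the sketch only shows the pairing logic.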
Problem

Research questions and friction points this paper is trying to address.

Ensuring ethical and safe LLM use in psychotherapy supervision
Addressing lack of gold standards for therapeutic behavior feedback
Developing targeted feedback for common therapist mistakes
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM supervises therapists via targeted feedback
Human-in-the-loop dataset trains mistake correction
MATE dataset enables high-quality clinical feedback
Authors
Chen Xu
Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), School of Medical Technology, Beijing Institute of Technology
Zhenyu Lv
Beijing Institute of Technology
Tian Lan
School of Computer Science and Technology, Beijing Institute of Technology
Xianyang Wang
Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), School of Medical Technology, Beijing Institute of Technology
Luyao Ji
Seventh Medical Center, Chinese People's Liberation Army General Hospital
Leyang Cui
Tencent AI Lab
Minqiang Yang
School of Information Science and Engineering, Lanzhou University
Jian Shen
Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), School of Medical Technology, Beijing Institute of Technology
Qunxi Dong
Beijing Institute of Technology
Xiuling Liu
School of Electronics and Information Engineering, Hebei University
Juan Wang
Seventh Medical Center, Chinese People's Liberation Army General Hospital
Bin Hu
Key Laboratory of Brain Health Intelligent Evaluation and Intervention, Ministry of Education (Beijing Institute of Technology), School of Medical Technology, Beijing Institute of Technology; School of Information Science and Engineering, Lanzhou University