PsychePass: Calibrating LLM Therapeutic Competence via Trajectory-Anchored Tournaments

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods struggle to assess the therapeutic capabilities of large language models effectively, and are often compromised by process drift and standard drift. This work therefore proposes PsychePass, a framework that integrates trajectory-anchored client simulation with a dynamic Swiss-system tournament, coupled with an Elo rating mechanism that transforms interaction trajectories into stable, comparable reward signals for policy-gradient reinforcement learning. This approach enables unified calibration and efficient optimization of models' therapeutic skills. Experimental results show that the framework's evaluations align closely with human expert judgments and significantly improve model performance on psychotherapeutic tasks.
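The Elo mechanism the summary mentions can be illustrated with the standard rating update applied to one pairwise battle between two counselor models. This is a minimal sketch of textbook Elo, not the paper's actual implementation; the function name and K-factor are assumptions.

```python
# Minimal sketch of an Elo update after one pairwise battle (assumed setup,
# not the paper's implementation). score_a is 1 if model A wins, 0.5 for a
# draw, 0 if model B wins.
def elo_update(r_a, r_b, score_a, k=32):
    # Expected score of A under the logistic Elo model.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    # Both ratings move by k times the surprise; total rating is conserved.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new
```

For two equally rated models (1000 each), a win moves the winner to 1016 and the loser to 984 with the default K-factor of 32; repeated battles drive the ratings toward each model's true relative strength.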

📝 Abstract
While large language models show promise in mental healthcare, evaluating their therapeutic competence remains challenging due to the unstructured and longitudinal nature of counseling. We argue that current evaluation paradigms suffer from an unanchored defect, leading to two forms of instability: process drift, where unsteered client simulation wanders away from specific counseling goals, and standard drift, where static pointwise scoring lacks the stability for reliable judgment. To address this, we introduce PsychePass, a unified framework that calibrates the therapeutic competence of LLMs via trajectory-anchored tournaments. We first anchor the interaction trajectory in simulation, where clients precisely control the fluid consultation process to probe multifaceted capabilities. We then anchor the battle trajectory in judgments through an efficient Swiss-system tournament, utilizing dynamic pairwise battles to yield robust Elo ratings. Beyond ranking, we demonstrate that tournament trajectories can be transformed into credible reward signals, enabling on-policy reinforcement learning to enhance LLMs' performance. Extensive experiments validate the effectiveness of PsychePass and its strong consistency with human expert judgments.
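The Swiss-system tournament the abstract describes pairs competitors dynamically: each round, entrants are sorted by current score and matched against similarly scored opponents they have not yet faced. A minimal sketch of that pairing rule, under assumed names and data structures (the paper's actual scheduling may differ):

```python
# Hypothetical sketch of Swiss-system pairing (assumed interface, not the
# paper's algorithm): sort models by current score, then greedily pair
# adjacent models that have not met before. Unmatched models get a bye.
def swiss_pair(models, scores, history):
    # history: set of frozensets recording which pairs have already played.
    ordered = sorted(models, key=lambda m: -scores[m])
    pairs, used = [], set()
    for a in ordered:
        if a in used:
            continue
        for b in ordered:
            if b != a and b not in used and frozenset((a, b)) not in history:
                pairs.append((a, b))
                used.update((a, b))
                break
    return pairs
```

With four models scored {A: 2, B: 2, C: 1, D: 1} and no prior matches, this yields the pairs (A, B) and (C, D), so strong models meet strong models and every battle is informative, which is why Swiss systems need far fewer rounds than a full round-robin.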
Problem

Research questions and friction points this paper is trying to address.

therapeutic competence
evaluation
large language models
counseling
mental healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory-anchored tournaments
therapeutic competence calibration
Swiss-system tournament
on-policy reinforcement learning
LLM evaluation