Ψ-Arena: Interactive Assessment and Optimization of LLM-based Psychological Counselors with Tripartite Feedback

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods for LLM-based counseling agents suffer from static design, single-perspective assessment, and non-actionable feedback. Method: We propose an interactive evaluation and optimization framework featuring (1) multi-stage NPC dialogues grounded in psychological profiling to simulate authentic counseling scenarios; (2) a novel tripartite collaborative evaluation mechanism involving users, AI, and human experts; and (3) a diagnosis-driven, closed-loop reflective optimization paradigm integrating reflection-based RLHF and structured feedback generation. Contribution/Results: Applied to eight mainstream LLMs, the framework reveals significant inter-model capability disparities; reflective optimization improves counseling quality by up to 141%. We release the first reproducible, extensible benchmark platform for mental health LLMs, advancing LLM-powered counseling toward safety, trustworthiness, and human-centered alignment.

📝 Abstract
Large language models (LLMs) have shown promise in providing scalable mental health support, while evaluating their counseling capability remains crucial to ensure both efficacy and safety. Existing evaluations are limited by static assessments that focus on knowledge tests, a single perspective centered on user experience, and open-loop frameworks that lack actionable feedback. To address these issues, we propose Ψ-Arena, an interactive framework for comprehensive assessment and optimization of LLM-based counselors, featuring three key characteristics: (1) Realistic arena interactions that simulate real-world counseling through multi-stage dialogues with psychologically profiled NPC clients, (2) Tripartite evaluation that integrates assessments from the client, counselor, and supervisor perspectives, and (3) Closed-loop optimization that iteratively improves LLM counselors using diagnostic feedback. Experiments across eight state-of-the-art LLMs show significant performance variations across real-world scenarios and evaluation perspectives. Moreover, reflection-based optimization yields up to a 141% improvement in counseling performance. We hope Ψ-Arena provides a foundational resource for advancing reliable and human-aligned LLM applications in mental healthcare.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM counseling efficacy and safety effectively
Overcoming static, single-perspective, open-loop assessment limitations
Optimizing LLM counselors via interactive tripartite feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stage dialogues with profiled NPC clients
Tripartite evaluation from client, counselor, supervisor
Closed-loop optimization with diagnostic feedback
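The three innovations above compose into one loop: simulate a multi-stage session with a profiled NPC client, score it from three perspectives, and fold the resulting diagnosis back into the counselor. A minimal sketch of that loop, with all names, stub scores, and the prompt-refinement step being illustrative assumptions rather than the paper's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the closed-loop tripartite pipeline.
# Scores, stage logic, and the refinement rule are stand-ins.

@dataclass
class Feedback:
    client: float      # client-perspective score (e.g. felt understood)
    counselor: float   # counselor self-assessment score
    supervisor: float  # expert-rubric score
    diagnosis: str     # actionable critique driving optimization

def run_session(counselor_prompt: str, npc_profile: dict, stages: int = 3) -> list[str]:
    """Simulate a multi-stage counseling dialogue with a profiled NPC client (stubbed)."""
    return [f"[stage {s}] client({npc_profile['issue']}) <-> counselor"
            for s in range(stages)]

def tripartite_eval(transcript: list[str]) -> Feedback:
    """Stub: three perspectives each score the session and a diagnosis is produced."""
    return Feedback(client=0.6, counselor=0.5, supervisor=0.4,
                    diagnosis="probe emotions before giving advice")

def optimize(counselor_prompt: str, fb: Feedback) -> str:
    """Closed-loop step: append the diagnostic feedback to the counselor prompt."""
    return counselor_prompt + f"\n# Reflection: {fb.diagnosis}"

prompt = "You are an empathetic counselor."
for _ in range(2):  # two refinement rounds
    transcript = run_session(prompt, {"issue": "exam anxiety"})
    fb = tripartite_eval(transcript)
    prompt = optimize(prompt, fb)

print(prompt.count("# Reflection:"))  # one reflection line per round
```

The point of the sketch is the control flow: evaluation is not a terminal score but a signal that re-enters the counselor, which is what distinguishes the closed-loop design from static benchmarks.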
Shijing Zhu
Central South University
Zhuang Chen
School of Computer Science, Central South University
Natural Language Processing, Social Intelligence, Computational Psychology
Guanqun Bi
Tsinghua University; UCAS
Social Agents, Natural Language Generation
Binghang Li
Lingxin AI
Yaxi Deng
Central South University
Dazhen Wan
Lingxin AI
Libiao Peng
Lingxin AI
Xiyao Xiao
Lingxin AI
Rongsheng Zhang
Fuxi AI Lab, NetEase Inc., Hangzhou, China
NLP
Tangjie Lv
NetEase
Reinforcement Learning
Zhipeng Hu
Fuxi AI Lab, NetEase Inc.
FangFang Li
Central South University
Minlie Huang
CoAI Group, DCST, IAI, BNRIST, Tsinghua University