🤖 AI Summary
Existing evaluation methods for LLM-based counseling agents suffer from static design, single-perspective assessment, and non-actionable feedback. Method: We propose an interactive evaluation and optimization framework featuring (1) multi-stage NPC dialogues grounded in psychological profiling to simulate authentic counseling scenarios; (2) a tripartite collaborative evaluation mechanism integrating the client, counselor, and supervisor perspectives; and (3) a diagnosis-driven, closed-loop reflective optimization paradigm that combines reflection-based RLHF with structured feedback generation. Contribution/Results: Evaluated on eight mainstream LLMs, the framework reveals significant inter-model capability disparities, and reflective optimization improves counseling quality by up to 141%. We release the first reproducible, extensible benchmark platform for mental health LLMs, advancing LLM-powered counseling toward safety, trustworthiness, and human-centered alignment.
📝 Abstract
Large language models (LLMs) have shown promise in providing scalable mental health support, but evaluating their counseling capability remains crucial to ensure both efficacy and safety. Existing evaluations are limited by static assessments that focus on knowledge tests, single-perspective designs that center on user experience, and open-loop frameworks that lack actionable feedback. To address these issues, we propose Ψ-Arena, an interactive framework for comprehensive assessment and optimization of LLM-based counselors, featuring three key characteristics: (1) Realistic arena interactions that simulate real-world counseling through multi-stage dialogues with psychologically profiled NPC clients, (2) Tripartite evaluation that integrates assessments from the client, counselor, and supervisor perspectives, and (3) Closed-loop optimization that iteratively improves LLM counselors using diagnostic feedback. Experiments across eight state-of-the-art LLMs show significant performance variations across different real-world scenarios and evaluation perspectives. Moreover, reflection-based optimization results in up to a 141% improvement in counseling performance. We hope Ψ-Arena provides a foundational resource for advancing reliable and human-aligned LLM applications in mental healthcare.
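To make the closed-loop idea concrete, here is a minimal sketch of how tripartite evaluation could drive reflective revision of a counselor response. This is an illustrative toy, not the paper's implementation: all function names, the 0-10 scoring scale, and the stand-in evaluators are assumptions introduced for this example.

```python
def tripartite_evaluate(response):
    """Toy stand-in for tripartite evaluation: score a response from
    three perspectives (0-10 each) and collect diagnostic feedback.
    Real systems would use NPC clients, self-assessment, and an expert
    supervisor model; these heuristics are placeholders."""
    scores = {
        "client": min(10, len(response.split())),  # proxy for perceived engagement
        "counselor": 7,                            # fixed self-assessed adherence
        "supervisor": 6,                           # fixed expert-style critique
    }
    # Perspectives scoring below threshold become diagnostic feedback.
    feedback = [p for p, s in scores.items() if s < 8]
    return sum(scores.values()) / 3, feedback

def reflective_optimize(draft, revise, max_rounds=3, target=8.0):
    """Closed-loop optimization: evaluate, then revise the response
    conditioned on diagnostic feedback until the target score or the
    round budget is reached."""
    response = draft
    for _ in range(max_rounds):
        score, feedback = tripartite_evaluate(response)
        if score >= target or not feedback:
            break
        response = revise(response, feedback)  # e.g., an LLM rewrite step
    return response

# Toy reviser: in practice this would be an LLM call that rewrites the
# response to address each flagged perspective.
result = reflective_optimize(
    "I hear you.",
    revise=lambda r, fb: r + " Tell me more about how that feels.",
)
```

The key design point the paper's abstract implies is that feedback is diagnostic (which perspective flagged what), not just a scalar reward, which is what makes the loop actionable.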