MindEval: Benchmarking Language Models on Multi-turn Mental Health Support

📅 2025-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI systems for psychological well-being risk flattering users and reinforcing maladaptive beliefs, yet existing evaluations rely predominantly on single-turn QA or clinical knowledge quizzes and fail to capture realistic, multi-turn support interactions. To address this gap, we propose the first comprehensive, LLM-driven evaluation framework for multi-turn psychological support, co-designed with clinical psychology PhDs. It integrates synthetic patient simulation with automated scoring, enabling model-agnostic, reproducible, and standardized assessment. Our experiments evaluate 12 state-of-the-art large language models, revealing average scores below 4 out of 6; all models exhibit significant performance degradation over extended dialogues, excessive agreement, and reinforcement of harmful beliefs, challenging the assumption that scale alone guarantees capability. All evaluation data, prompts, and code are publicly released.
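
To make the framework's structure concrete, here is a minimal sketch of the simulate-and-score loop the summary describes. The function names, prompts, rubric wording, and the `chat()` helper are assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch, not the paper's implementation: a multi-turn simulate-and-score
# loop in the spirit of MindEval. Prompts and rubric wording are illustrative.
from dataclasses import dataclass


@dataclass
class Turn:
    role: str   # "patient" or "counselor"
    text: str


def chat(model: str, system: str, history: list[Turn]) -> str:
    """Placeholder for any chat-completion API; the framework is model-agnostic."""
    raise NotImplementedError  # plug in an OpenAI-, Anthropic-, or local-model client


def run_session(patient_model: str, counselor_model: str, persona: str,
                n_turns: int = 10) -> list[Turn]:
    """Alternate a simulated patient and the model under test for n_turns exchanges."""
    history: list[Turn] = []
    patient_sys = f"You are a therapy client. Stay in character: {persona}"
    counselor_sys = "You are a supportive mental-health counselor."
    for _ in range(n_turns):
        history.append(Turn("patient", chat(patient_model, patient_sys, history)))
        history.append(Turn("counselor", chat(counselor_model, counselor_sys, history)))
    return history


def judge_session(judge_model: str, history: list[Turn]) -> float:
    """Ask an LLM judge for a single 1-6 score over the counselor's turns."""
    transcript = "\n".join(f"{t.role}: {t.text}" for t in history)
    rubric = ("Rate the counselor from 1 (harmful) to 6 (excellent), penalizing "
              "sycophancy and reinforcement of maladaptive beliefs.\n\n" + transcript)
    return float(chat(judge_model, rubric, []))
```

Under this sketch, a single benchmark run would chain `judge_session(judge, run_session(patient, counselor, persona))`, repeated across personas and conversation lengths to aggregate a score.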

📝 Abstract
Demand for mental health support through AI chatbots is surging, yet current systems present several limitations, such as sycophancy, overvalidation, and reinforcement of maladaptive beliefs. A core obstacle to building better systems is the scarcity of benchmarks that capture the complexity of real therapeutic interactions: most existing benchmarks either test clinical knowledge through multiple-choice questions or assess single responses in isolation. To bridge this gap, we present MindEval, a framework designed in collaboration with Ph.D.-level Licensed Clinical Psychologists for automatically evaluating language models in realistic, multi-turn mental health therapy conversations. Through patient simulation and automatic evaluation with LLMs, our framework balances resistance to gaming with reproducibility via its fully automated, model-agnostic design. We first quantitatively validate the realism of our simulated patients against human-generated text and demonstrate strong correlations between automatic and human expert judgments. We then evaluate 12 state-of-the-art LLMs and show that all models struggle, scoring below 4 out of 6 on average, with particular weaknesses in problematic AI-specific patterns of communication. Notably, reasoning capabilities and model scale do not guarantee better performance, and systems deteriorate over longer interactions or when supporting patients with severe symptoms. We release all code, prompts, and human evaluation data.
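
As a rough illustration of the validation step mentioned in the abstract (comparing automatic judgments against human expert ratings), the sketch below computes a rank correlation over paired scores. The choice of Spearman correlation and the toy numbers are assumptions, not the paper's reported analysis or results.

```python
# Sketch of correlating automatic LLM-judge scores with human expert ratings.
# Values are hypothetical; one pair per evaluated conversation, on the 1-6 scale.
from scipy.stats import spearmanr

judge_scores  = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5]
expert_scores = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0]

rho, p_value = spearmanr(judge_scores, expert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```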
Problem

Research questions and friction points this paper is trying to address.

Evaluating AI chatbots in realistic multi-turn mental health conversations
Addressing limitations like sycophancy and reinforcement of maladaptive beliefs
Overcoming scarcity of benchmarks capturing therapeutic interaction complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multi-turn therapy conversation evaluation framework
Patient simulation validated against human-generated text
Model-agnostic design balancing gaming resistance and reproducibility
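
Assuming a run configuration along these lines (field names, defaults, and example values are illustrative, not taken from the released code), the model-agnostic design reduces to swapping model identifiers in and out of the patient, counselor, and judge roles:

```python
# Minimal sketch of a model-agnostic run configuration for a single evaluation.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalConfig:
    counselor_model: str    # the system under evaluation
    patient_model: str      # LLM driving the simulated patient
    judge_model: str        # LLM producing rubric scores
    persona_id: str         # which simulated-patient profile to use
    max_turns: int = 10     # longer dialogues probe degradation over extended interactions
    seed: int = 0           # fixed seed where the backend supports it, for reproducibility

config = EvalConfig(
    counselor_model="model-under-test",
    patient_model="patient-simulator-llm",
    judge_model="judge-llm",
    persona_id="severe-symptoms-01",
)
```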