When Can We Trust LLMs in Mental Health? Large-Scale Benchmarks for Reliable LLM Evaluation

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluation benchmarks for mental health are limited in scale and reliability and lack trustworthy automated assessment frameworks. Method: We introduce two high-quality benchmarks, MentalBench-100k and MentalAlign-70k, comprising real-world therapeutic dialogues and multi-model generated responses. We propose the Affective Cognitive Agreement Framework (ACAF), the first to employ intraclass correlation coefficients (ICC) with confidence intervals to quantify agreement, bias, and stability between LLM-based raters and human experts across cognitive (e.g., directive, informative) and affective (e.g., empathic, safe) dimensions. Results: Experiments show that LLM raters achieve high reliability on cognitive attributes (ICC > 0.8) but exhibit systematic score inflation and instability on affective attributes (ICC < 0.5). This work establishes new benchmark datasets, a principled evaluation framework, and empirical evidence for trustworthy LLM-based assessment of psychological support.
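
As a rough illustration of the agreement statistic at the heart of ACAF, the sketch below computes an ICC with a 95% confidence interval for a single attribute using pingouin's `intraclass_corr`. The long-format layout, column names, toy scores, and the choice of the ICC(2,k) variant are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch (not the paper's code): agreement between an LLM judge
# and a human expert on one attribute, via ICC with a 95% confidence interval.
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (response, rater) pair for one attribute.
# Column names and scores are hypothetical.
ratings = pd.DataFrame({
    "response_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":       ["human", "llm_judge"] * 4,
    "score":       [4, 5, 2, 3, 5, 5, 3, 4],   # e.g., 1-5 Likert scores for "empathic"
})

icc = pg.intraclass_corr(
    data=ratings, targets="response_id", raters="rater", ratings="score"
)

# Report the two-way random-effects, average-raters estimate (ICC2k) with its CI.
row = icc.set_index("Type").loc["ICC2k"]
print(f"ICC = {row['ICC']:.2f}, 95% CI = {row['CI95%']}")
```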

📝 Abstract
Evaluating Large Language Models (LLMs) for mental health support is challenging due to the emotionally and cognitively complex nature of therapeutic dialogue. Existing benchmarks are limited in scale and reliability, often rely on synthetic or social media data, and lack frameworks to assess when automated judges can be trusted. To address the need for large-scale dialogue datasets and judge reliability assessment, we introduce two benchmarks that provide a framework for generation and evaluation. MentalBench-100k consolidates 10,000 one-turn conversations from three real-scenario datasets, each paired with nine LLM-generated responses, yielding 100,000 response pairs. MentalAlign-70k reframes evaluation by comparing four high-performing LLM judges with human experts across 70,000 ratings on seven attributes, grouped into the Cognitive Support Score (CSS) and the Affective Resonance Score (ARS). We then employ the Affective Cognitive Agreement Framework, a statistical methodology using intraclass correlation coefficients (ICC) with confidence intervals to quantify agreement, consistency, and bias between LLM judges and human experts. Our analysis reveals systematic inflation by LLM judges, strong reliability for cognitive attributes such as guidance and informativeness, reduced precision for empathy, and some unreliability in safety and relevance. Our contributions establish new methodological and empirical foundations for reliable, large-scale evaluation of LLMs in mental health. We release the benchmarks and code at: https://github.com/abeerbadawi/MentalBench/
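
The systematic score inflation noted in the abstract can be probed with a simple paired comparison of LLM-judge and human scores per attribute. Below is a minimal sketch under assumed column names and a paired t-test; it is not the paper's exact procedure.

```python
# Hypothetical bias/inflation check: how much higher does an LLM judge score
# responses than human experts, per attribute? Layout and data are assumed.
import pandas as pd
from scipy import stats

# One row per response: human and LLM-judge scores for one attribute.
df = pd.DataFrame({
    "attribute": ["empathic"] * 5,
    "human":     [3, 4, 2, 5, 3],
    "llm_judge": [4, 5, 3, 5, 4],
})

for attr, grp in df.groupby("attribute"):
    diff = grp["llm_judge"] - grp["human"]
    t, p = stats.ttest_rel(grp["llm_judge"], grp["human"])
    print(f"{attr}: mean inflation = {diff.mean():+.2f} points (paired t = {t:.2f}, p = {p:.3f})")
```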
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM reliability in mental health support dialogues
Assessing systematic biases in automated mental health evaluations
Developing frameworks for trustworthy cognitive and affective scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale benchmarks with real conversation datasets
Statistical framework measuring human-LLM judge agreement
Evaluation metrics combining cognitive and affective dimensions (see the sketch below)
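
One plausible reading of these composite metrics is an unweighted mean over the attribute-level ratings in each group. The sketch below follows that reading; the grouping beyond the attributes explicitly named in the abstract, and the use of a simple mean, are assumptions rather than the authors' definition.

```python
# Hypothetical aggregation into the Cognitive Support Score (CSS) and the
# Affective Resonance Score (ARS). Attribute grouping and averaging are assumed;
# only attributes named in this summary are used (the paper defines seven).
import pandas as pd

CSS_ATTRS = ["guidance", "informativeness"]      # cognitive group (named in the abstract)
ARS_ATTRS = ["empathy", "safety", "relevance"]   # affective group (grouping assumed)

ratings = pd.DataFrame([
    {"guidance": 4, "informativeness": 5, "empathy": 3, "safety": 5, "relevance": 4},
    {"guidance": 2, "informativeness": 3, "empathy": 4, "safety": 4, "relevance": 3},
])

ratings["CSS"] = ratings[CSS_ATTRS].mean(axis=1)
ratings["ARS"] = ratings[ARS_ATTRS].mean(axis=1)
print(ratings[["CSS", "ARS"]])
```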