MAQuA: Adaptive Question-Asking for Multidimensional Mental Health Screening using Item Response Theory

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional mental health screening imposes a high user burden, is inefficient, and handles multidimensional assessment poorly. This paper proposes the first adaptive, multidimensional psychological screening framework that integrates large language models (LLMs) with Item Response Theory (IRT). Methodologically, it unifies multi-outcome language modeling with factor analysis to assess several diagnostic dimensions simultaneously and dynamically, including depression, anxiety, eating disorders, and substance use disorders, and introduces an information-theoretic question-selection strategy with an early-termination mechanism to minimize response burden. Evaluation on a newly curated dataset shows that, compared with random item administration, the framework reduces the number of items needed by 50%–87%: stable depression scores require only 29% of the original items, and reliable eating-disorder screening needs merely 15%. The approach thus balances diagnostic accuracy, administration efficiency, and scalability.

📝 Abstract
Recent advances in large language models (LLMs) offer new opportunities for scalable, interactive mental health assessment, but excessive querying by LLMs burdens users and is inefficient for real-world screening across transdiagnostic symptom profiles. We introduce MAQuA, an adaptive question-asking framework for simultaneous, multidimensional mental health screening. Combining multi-outcome modeling on language responses with item response theory (IRT) and factor analysis, MAQuA selects the questions with most informative responses across multiple dimensions at each turn to optimize diagnostic information, improving accuracy and potentially reducing response burden. Empirical results on a novel dataset reveal that MAQuA reduces the number of assessment questions required for score stabilization by 50-87% compared to random ordering (e.g., achieving stable depression scores with 71% fewer questions and eating disorder scores with 85% fewer questions). MAQuA demonstrates robust performance across both internalizing (depression, anxiety) and externalizing (substance use, eating disorder) domains, with early stopping strategies further reducing patient time and burden. These findings position MAQuA as a powerful and efficient tool for scalable, nuanced, and interactive mental health screening, advancing the integration of LLM-based agents into real-world clinical workflows.
Problem

Research questions and friction points this paper is trying to address.

Reducing user burden in mental health screening via adaptive questioning
Optimizing diagnostic accuracy across multiple symptom dimensions simultaneously
Integrating LLMs with psychometrics for efficient clinical assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines multi-outcome modeling with IRT
Adaptively selects most informative questions
Reduces assessment questions by 50-87%
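The adaptive selection loop behind these contributions can be sketched as follows. This is a hypothetical minimal illustration under a unidimensional 2PL IRT model with a maximum-Fisher-information item rule, a grid-based MAP ability estimate, and standard-error early stopping; the paper's actual multidimensional model, factor-analytic structure, and LLM-based scoring of language responses are not reproduced here, and all function and parameter names are invented for illustration.

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT: probability of endorsing an item given latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information contributed by a 2PL item at theta."""
    p = p_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def map_estimate(items, responses, grid=None):
    """MAP estimate of theta on a coarse grid with a standard-normal prior."""
    grid = grid or [g / 10.0 for g in range(-40, 41)]
    def log_post(t):
        lp = -0.5 * t * t  # log of N(0, 1) prior, up to a constant
        for i, x in responses:
            p = p_endorse(t, *items[i])
            lp += math.log(p) if x else math.log(1.0 - p)
        return lp
    return max(grid, key=log_post)

def select_item(theta, items, asked):
    """Pick the unasked item with maximal information at the current theta."""
    return max((i for i in range(len(items)) if i not in asked),
               key=lambda i: fisher_info(theta, *items[i]))

def adaptive_screen(items, respond, se_stop=0.4, max_items=10):
    """Greedy maximum-information administration with early stopping.

    items:   list of (discrimination a, difficulty b) pairs
    respond: callback returning the 0/1 response to item index i
    Stops once the standard error of theta drops below se_stop.
    """
    theta, asked, responses = 0.0, set(), []
    for _ in range(max_items):
        i = select_item(theta, items, asked)
        asked.add(i)
        responses.append((i, respond(i)))
        theta = map_estimate(items, responses)
        se = 1.0 / math.sqrt(sum(fisher_info(theta, *items[j])
                                 for j, _ in responses))
        if se < se_stop:
            break  # early termination: theta is already stable enough
    return theta, [j for j, _ in responses]
```

The key efficiency lever, mirroring the paper's reported item reductions, is that each administered item is chosen where it is most informative about the current trait estimate, so the standard-error stopping rule is reached with far fewer items than a fixed or random ordering would need.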
Vasudha Varadarajan
Carnegie Mellon University
natural language processing, computational social science
Hui Xu
Department of Electrical and Computer Engineering, Stony Brook University
Rebecca Astrid Boehme
Centre of Functionally Integrative Neuroscience, Aarhus University, Denmark
Mariam Marlan Mirström
Department of Psychology, Lund University
Sverker Sikström
Department of Psychology, Lund University
H. Andrew Schwartz
Computer Science & Psychology, Stony Brook University
natural language processing, human centered AI, computational psychology, health informatics