Toward Corpus Size Requirements for Training and Evaluating Depression Risk Models Using Spoken Language

📅 2022-09-18
🏛️ Interspeech
📈 Citations: 7
Influential: 0
🤖 AI Summary
This study investigates the minimum dataset sizes required for stable and reliable speech-based depression risk prediction. Leveraging over 65,000 annotated speech samples, the authors conduct fully crossed train/test experiments to systematically evaluate how corpus size affects model performance across linguistic (BERT) and acoustic (eGeMAPS, wav2vec 2.0) modalities. The key contribution is a quantitative identification of empirical thresholds: test sets below 1,000 samples yield noisy, high-variance results even with large training sets, while training sets below 2,000 samples produce significant performance fluctuations. These patterns hold consistently across both modalities and generalize to an age-mismatched test set. The findings establish reproducible, evidence-based data-sizing guidelines for speech-based mental health assessment.

📝 Abstract
Mental health risk prediction is a growing field in the speech community, but many studies are based on small corpora. This study illustrates how variations in test and train set sizes impact performance in a controlled study. Using a corpus of over 65K labeled data points, results from a fully crossed design of different train/test size combinations are provided. Two model types are included: one based on language and the other on speech acoustics. Both use methods current in this domain. An age-mismatched test set was also included. Results show that (1) test sizes below 1K samples gave noisy results, even for larger training set sizes; (2) training set sizes of at least 2K were needed for stable results; (3) NLP and acoustic models behaved similarly with train/test size variations, and (4) the mismatched test set showed the same patterns as the matched test set. Additional factors are discussed, including label priors, model strength and pre-training, unique speakers, and data lengths. While no single study can specify exact size requirements, results demonstrate the need for appropriately sized train and test sets for future studies of mental health risk prediction from speech and language.
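The fully crossed design described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the corpus is replaced by a synthetic pool of scalar "model scores" with binary labels, and the "model" is a toy threshold classifier. The function name `run_crossed_design` and all sizes are illustrative assumptions; the point is the protocol of resampling every (train size, test size) cell and measuring the spread of the test metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 65K-sample corpus: one scalar "model score"
# per sample (weak class signal) plus a binary depression-risk label.
N = 20_000
labels = rng.integers(0, 2, N)
scores = rng.normal(loc=labels.astype(float), scale=2.0)

def run_crossed_design(train_sizes, test_sizes, n_repeats=50):
    """Fully crossed train/test size grid; returns the std of test accuracy
    per (train_size, test_size) cell across random resamples."""
    spread = {}
    for n_tr in train_sizes:
        for n_te in test_sizes:
            accs = []
            for _ in range(n_repeats):
                idx = rng.permutation(N)
                tr, te = idx[:n_tr], idx[n_tr:n_tr + n_te]
                # Toy "model": decision threshold at the midpoint of the
                # two class means estimated from the training subsample.
                thr = (scores[tr][labels[tr] == 1].mean()
                       + scores[tr][labels[tr] == 0].mean()) / 2
                accs.append(((scores[te] > thr) == labels[te]).mean())
            spread[(n_tr, n_te)] = float(np.std(accs))
    return spread

spread = run_crossed_design(train_sizes=[500, 2000], test_sizes=[200, 2000])
```

Even in this toy setting, the smallest test-set cells show the largest run-to-run variance, mirroring the paper's observation that undersized test sets give noisy results regardless of training-set size.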
Problem

Research questions and friction points this paper is trying to address.

Depression Risk Model
Speech Data
Minimum Effective Amount
Innovation

Methods, ideas, or system contributions that make the work stand out.

Depression Risk Prediction
Data Quantity Impact
Model Stability
T. Rutowski
Ellipsis Health
A. Harati
Ellipsis Health
Elizabeth Shriberg
Chief Science Officer, Ellipsis Health
conversational AI, speech technology, speaker state detection, healthcare, affective computing
Yang Lu
Ellipsis Health
P. Chlebek
Ellipsis Health
R. Oliveira
Ellipsis Health