🤖 AI Summary
Sleep foundation models face two bottlenecks: the lack of a unified benchmark and the absence of systematic evaluation of self-supervised representation learning (SSRL) methods. Method: We introduce the first large-scale, multi-task polysomnography (PSG) benchmark comprising 17,467 studies (>163,000 hours), covering sleep staging, apnea diagnosis, age estimation, and prediction of 13 diseases and all-cause mortality. We establish the first standardized multi-task PSG evaluation framework to systematically compare diverse self-supervised paradigms, including contrastive learning, masked autoencoding, and predictive modeling. Contribution/Results: While performance on conventional tasks (e.g., sleep staging) is comparable across pretraining methods, contrastive learning achieves significant AUC improvements of 3.2–5.8% on disease and mortality prediction and accelerates pretraining convergence by 40%. The benchmark and accompanying empirical analysis advance reproducibility, clinical generalizability, and cross-task representation transfer in sleep foundation modeling.
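To make the contrastive paradigm concrete, the sketch below shows an InfoNCE-style objective over two augmented views of the same PSG epochs. The `encoder`, tensor shapes, and augmentation setup are illustrative assumptions, not the benchmark's released implementation.

```python
# Minimal sketch of an InfoNCE-style contrastive objective on PSG epochs.
# `encoder` is a hypothetical network mapping a multichannel 30-second epoch
# to an embedding; shapes are illustrative only.
import torch
import torch.nn.functional as F

def info_nce_loss(encoder, view_a, view_b, temperature=0.1):
    """view_a, view_b: (batch, channels, samples) augmented views of the same epochs."""
    z_a = F.normalize(encoder(view_a), dim=-1)  # (batch, dim)
    z_b = F.normalize(encoder(view_b), dim=-1)  # (batch, dim)
    logits = z_a @ z_b.T / temperature          # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Matching views are positives; all other epochs in the batch serve as negatives.
    return F.cross_entropy(logits, targets)
```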
📝 Abstract
Polysomnography (PSG), the gold-standard test for sleep analysis, generates vast amounts of multimodal clinical data, presenting an opportunity to leverage self-supervised representation learning (SSRL) to pretrain foundation models that enhance sleep analysis. However, progress in sleep foundation models is hindered by two key limitations: (1) the lack of a shared dataset and benchmark with diverse tasks for training and evaluation, and (2) the absence of a systematic evaluation of SSRL approaches across sleep-related tasks. To address these gaps, we introduce Stanford Sleep Bench, a large-scale PSG dataset comprising 17,467 recordings totaling over 163,000 hours from a major sleep clinic, including 13 clinical disease prediction tasks alongside canonical sleep-related tasks such as sleep staging, apnea diagnosis, and age estimation. We systematically evaluate SSRL pretraining methods on Stanford Sleep Bench, assessing downstream performance across four tasks: sleep staging, apnea diagnosis, age estimation, and disease and mortality prediction. Our results show that multiple pretraining methods achieve comparable performance for sleep staging, apnea diagnosis, and age estimation. However, for mortality and disease prediction, contrastive learning significantly outperforms other approaches while also converging faster during pretraining. To facilitate reproducibility and advance sleep research, we will release Stanford Sleep Bench along with pretrained model weights, training pipelines, and evaluation code.
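As a rough illustration of how pretrained representations can be scored on downstream tasks such as disease and mortality prediction, the sketch below freezes a pretrained encoder, pools epoch embeddings per recording, and fits a linear probe evaluated by AUC. The `encoder`, pooling choice, and data placeholders are assumptions for illustration, not the evaluation pipeline that will be released.

```python
# Minimal sketch of a frozen-encoder linear-probe evaluation with AUC.
# `encoder` and the recording/label placeholders are hypothetical.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def recording_embedding(encoder, epochs):
    """epochs: (n_epochs, channels, samples) tensor for one PSG recording."""
    z = encoder(epochs)                  # (n_epochs, dim) epoch embeddings
    return z.mean(dim=0).cpu().numpy()   # simple mean pooling over the night

def linear_probe_auc(encoder, train_recs, train_y, test_recs, test_y):
    # Embed each recording with the frozen encoder, then fit a linear classifier.
    X_train = np.stack([recording_embedding(encoder, r) for r in train_recs])
    X_test = np.stack([recording_embedding(encoder, r) for r in test_recs])
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_y)
    return roc_auc_score(test_y, clf.predict_proba(X_test)[:, 1])
```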