🤖 AI Summary
This work proposes a waveform-conditioned generative framework that achieves, for the first time, cross-modal synthesis of sleep electroencephalography (EEG) from respiratory signals, enabling contactless neurophysiological assessment. By integrating discrete tokenization with large-scale multimodal physiological modeling, the model is trained on data from over 28,000 individuals and achieves a mean absolute error (MAE) of only 7% in EEG spectrogram reconstruction. The synthesized EEG performs nearly on par with real EEG in downstream tasks such as age estimation, sex classification, and sleep staging, substantially outperforming baseline approaches that operate directly on respiratory signals. Furthermore, the framework generalizes to wireless radio-frequency reflection signals, establishing a novel paradigm for unobtrusive physiological monitoring.
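To make the reported reconstruction error concrete, below is a minimal sketch of how a relative spectrogram MAE could be computed between real and synthesized EEG. The sampling rate, window length, and normalization are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_mae(real_eeg, synth_eeg, fs=100.0, nperseg=256, eps=1e-8):
    """Relative MAE between spectrograms of real and synthesized EEG.

    Both inputs are 1-D waveforms sampled at `fs` Hz; window length and
    normalization are illustrative choices.
    """
    _, _, S_real = spectrogram(real_eeg, fs=fs, nperseg=nperseg)
    _, _, S_synth = spectrogram(synth_eeg, fs=fs, nperseg=nperseg)
    # Normalizing by the real spectrogram's mean magnitude expresses the
    # error as a fraction, so a value of 0.07 reads as "7% MAE".
    return np.mean(np.abs(S_real - S_synth)) / (np.mean(np.abs(S_real)) + eps)

# Toy usage with random stand-in signals (30 s at 100 Hz).
rng = np.random.default_rng(0)
real, synth = rng.standard_normal(3000), rng.standard_normal(3000)
print(f"relative spectrogram MAE: {spectrogram_mae(real, synth):.3f}")
```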
📄 Abstract
This paper introduces a novel cross-physiology translation task: synthesizing sleep electroencephalography (EEG) from respiration signals. To address the significant complexity gap between the two modalities, we propose a waveform-conditional generative framework that preserves fine-grained respiratory dynamics while constraining the EEG target space through discrete tokenization. Trained on data from over 28,000 individuals, our model achieves a mean absolute error (MAE) of 7% in EEG spectrogram reconstruction. Beyond reconstruction, the synthesized EEG supports downstream tasks with performance comparable to ground-truth EEG on age estimation (MAE 5.0 vs. 5.1 years), sex detection (AUROC 0.81 vs. 0.82), and sleep staging (accuracy 0.84 vs. 0.88), significantly outperforming baselines trained directly on breathing signals. Finally, we demonstrate that the framework generalizes to contactless sensing by synthesizing EEG from wireless radio-frequency reflections, highlighting the feasibility of remote, non-contact neurological assessment during sleep.
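As a rough illustration of the kind of pipeline the abstract describes, the hypothetical sketch below encodes a respiration waveform with strided convolutions and uses a Transformer to predict discrete EEG tokens (e.g., codebook indices from a pre-trained EEG tokenizer). All module names, layer sizes, and the codebook size are assumptions for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class Resp2EEGTokens(nn.Module):
    """Hypothetical waveform-conditioned generator: a respiration encoder
    conditions a Transformer that predicts discrete EEG tokens."""

    def __init__(self, codebook_size=1024, d_model=256, n_layers=4):
        super().__init__()
        # Strided 1-D convolutions keep fine-grained breathing dynamics
        # while downsampling the raw waveform to a token-rate sequence.
        self.resp_encoder = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=9, stride=4, padding=4),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=9, stride=4, padding=4),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Classify each frame into one of `codebook_size` EEG tokens,
        # constraining the target space to a discrete vocabulary.
        self.to_tokens = nn.Linear(d_model, codebook_size)

    def forward(self, resp):                      # resp: (batch, samples)
        h = self.resp_encoder(resp.unsqueeze(1))  # (batch, d_model, frames)
        h = self.backbone(h.transpose(1, 2))      # (batch, frames, d_model)
        return self.to_tokens(h)                  # (batch, frames, codebook_size)

# Toy forward pass: 30 s of respiration at 10 Hz for a batch of 2.
logits = Resp2EEGTokens()(torch.randn(2, 300))
print(logits.shape)  # torch.Size([2, 19, 1024])
```

Conditioning on the raw waveform, rather than on hand-crafted breathing features, is what lets fine-grained respiratory dynamics inform the generation, while the discrete token vocabulary constrains the otherwise far richer EEG output space.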