🤖 AI Summary
Modeling wearable-derived motion time-series for mental health assessment remains challenging due to limited interpretability and generalizability. Method: The authors propose PAT (Pretrained Actigraphy Transformer), a lightweight, interpretable foundation model for actigraphy. Built on a Transformer architecture with temporal patch embeddings, PAT is pretrained in a self-supervised fashion on motion data from 29,307 U.S. participants, adapting pretraining paradigms from language modeling to psychiatric time-series analysis. Contribution/Results: PAT achieves state-of-the-art performance on several mental health prediction tasks, including depression and anxiety prediction, while remaining small and efficient at inference. Its design supports clinically grounded interpretability, making its predictions actionable for practitioners, and the model is open-sourced.
📝 Abstract
Pretrained foundation models and transformer architectures have driven the success of large language models (LLMs) and other modern AI breakthroughs. However, similar advancements in health data modeling remain limited due to the need for innovative adaptations. Wearable movement data offers a valuable avenue for exploration: it is a core feature in nearly all commercial smartwatches, it is well established in clinical and mental health research, and its sequential nature shares structural similarities with language. We introduce the Pretrained Actigraphy Transformer (PAT), the first open-source foundation model designed for time-series wearable movement data. Leveraging transformer-based architectures and novel techniques, such as patch embeddings, and pretraining on data from 29,307 participants in a national U.S. sample, PAT achieves state-of-the-art performance in several mental health prediction tasks. PAT is also lightweight and easily interpretable, making it a robust tool for mental health research. GitHub: https://github.com/njacobsonlab/Pretrained-Actigraphy-Transformer/
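To make the patch-embedding idea concrete, here is a minimal NumPy sketch of how a long actigraphy trace can be split into fixed-length temporal patches and linearly projected into token embeddings for a Transformer. The sequence length, patch size, and embedding dimension below are illustrative assumptions, not the paper's actual configuration, and the random projection stands in for weights that would be learned during pretraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: one week of minute-level activity
# counts (10,080 steps), split into patches of 18 minutes each.
seq_len, patch_size, embed_dim = 10_080, 18, 96
num_patches = seq_len // patch_size  # 560 patch tokens

# One participant's actigraphy trace (random stand-in for real data).
activity = rng.random(seq_len)

# Temporal patching: reshape the 1-D series into (num_patches, patch_size).
patches = activity.reshape(num_patches, patch_size)

# Linear patch embedding: project each patch to a d-dimensional token.
# W and b would be learned parameters in the actual model.
W = rng.normal(scale=0.02, size=(patch_size, embed_dim))
b = np.zeros(embed_dim)
tokens = patches @ W + b  # shape: (num_patches, embed_dim)

print(tokens.shape)  # (560, 96)
```

Patching shortens the sequence the Transformer must attend over (here from 10,080 raw steps to 560 tokens), which is what keeps self-attention tractable on week-long minute-level recordings.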