🤖 AI Summary
This work addresses a limitation of current large language models that treat electronic health records (EHR) as static text, thereby neglecting the dynamic evolution of patient states over time and in response to interventions. To overcome this, the study introduces the world-model paradigm into longitudinal EHR modeling and proposes SMB-Structure, a novel framework that integrates supervised fine-tuning (SFT) with a joint-embedding predictive architecture (JEPA). This approach predicts an entire patient trajectory in latent space from only an initial representation, circumventing the constraints of conventional autoregressive methods. Experiments on the MSK cohort (23,319 oncology patients) and the INSPECT cohort (19,402 pulmonary embolism patients) demonstrate that the learned representations effectively capture disease-progression dynamics and significantly outperform autoregressive baselines on highly heterogeneous and complex clinical tasks.
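The latent-space prediction idea above can be sketched as a toy JEPA-style objective: encode the initial visit, predict all future latent states from that single representation, and regress against the (stop-gradient) target latents. Everything below is a hypothetical stand-in, not the paper's implementation: the dimensions, the random-projection encoder, and the single linear predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32-dim visit features, 16-dim latents, 4 future visits.
feat_dim, d, T = 32, 16, 4

W_enc = rng.normal(size=(feat_dim, d)) / np.sqrt(feat_dim)   # toy encoder
W_pred = rng.normal(size=(d, T * d)) / np.sqrt(d)            # latent predictor

def encode(visit_feats):
    # Stand-in for the target encoder (stop-gradient in real training).
    return visit_feats @ W_enc

initial_visit = rng.normal(size=feat_dim)        # first observed visit
future_visits = rng.normal(size=(T, feat_dim))   # later visits

z0 = encode(initial_visit)                       # initial patient representation
z_pred = (z0 @ W_pred).reshape(T, d)             # predict ALL future latents from z0 alone
z_tgt = np.stack([encode(v) for v in future_visits])

# JEPA-style loss: regression in latent space, not token space.
jepa_loss = float(np.mean((z_pred - z_tgt) ** 2))
```

The key property illustrated: the predictor only ever sees `z0`, so any trajectory dynamics it captures must already be encoded in the initial representation.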
📝 Abstract
Large language models (LLMs) trained with next-word prediction have achieved success as clinical foundation models. Representations from these language backbones yield strong linear-probe performance across biomedical tasks, suggesting that patient semantics emerge from next-token prediction at scale. However, this paradigm treats a patient as a document to be summarized rather than a dynamical system to be simulated; a patient's trajectory emerges from their state evolving under interventions and time, requiring models that simulate dynamics rather than predict tokens. To address this, we introduce SMB-Structure, a world model for structured EHR that grounds a joint-embedding predictive architecture (JEPA) with next-token supervised fine-tuning (SFT). SFT grounds our model to reconstruct future patient states in token space, while JEPA predicts those futures in latent space from the initial patient representation alone, forcing trajectory dynamics to be encoded before the next state is observed. We validate across two large-scale cohorts: Memorial Sloan Kettering (23,319 oncology patients; 323,000+ patient-years) and INSPECT (19,402 pulmonary embolism patients). Using a linear probe evaluated at multiple points along the disease trajectory, we demonstrate that our training paradigm learns embeddings that capture disease dynamics not recoverable by autoregressive baselines, enabling SMB-Structure to achieve competitive performance on complex tasks characterized by high patient heterogeneity. Model weights are available at https://huggingface.co/standardmodelbio/SMB-v1-1.7B-Structure.
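The evaluation protocol described above (a linear probe applied to frozen embeddings at multiple points along the trajectory) can be sketched as follows. This is a minimal toy, assuming synthetic embeddings whose label signal grows as more of the trajectory is observed; the cohort sizes, timepoints, and least-squares probe are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 200, 16                       # toy cohort size and embedding dim
labels = rng.integers(0, 2, size=n)  # binary outcome per patient

accs = []
for frac in [0.25, 0.5, 1.0]:        # fraction of trajectory observed so far
    # Toy frozen embeddings: label signal strengthens later in the trajectory.
    X = rng.normal(size=(n, d)) + labels[:, None] * frac
    # Linear probe: least-squares fit on frozen features, no fine-tuning.
    w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)
    accs.append(float(np.mean((X @ w > 0.5) == labels)))
```

Probing at several timepoints, rather than only at the end of the record, is what lets the evaluation distinguish embeddings that encode disease *dynamics* from those that merely summarize the observed history.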