Rethinking Tokenization for Clinical Time Series: When Less is More

📅 2025-12-04
🤖 AI Summary
This study systematically investigates how tokenization strategies affect Transformer performance in clinical time-series modeling. Using the MIMIC-IV dataset, we conduct controlled ablation experiments comparing explicit temporal encoding schemes, numerical embedding approaches, and encoder training paradigms across four clinical prediction tasks. Key findings are: (1) The raw clinical code sequence alone contains sufficient predictive signal; explicit temporal encoding yields no statistically significant improvement. (2) Freezing pre-trained clinical encoders—such as Med-PaLM or ClinicalBERT embeddings—substantially outperforms end-to-end training, while requiring fewer parameters and enabling faster inference. (3) Larger-scale frozen encoders consistently enhance performance. Collectively, these results demonstrate that lightweight, frozen tokenization strategies achieve both computational efficiency and strong generalization, establishing a simple yet effective paradigm for clinical time-series modeling.
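The finding that raw code sequences alone carry sufficient predictive signal implies a very simple tokenizer: timestamps are used only to order events, and no explicit temporal features are emitted. A minimal sketch of that idea (all names and the event format are illustrative, not the paper's actual pipeline):

```python
# Hypothetical sketch: tokenize a clinical event stream as a plain code
# sequence. Timestamps order the events but produce no time tokens,
# mirroring the paper's finding that explicit temporal encoding adds
# no significant benefit on the evaluated tasks.

def tokenize_events(events, vocab):
    """Map (timestamp, code, value) events to code-token ids in temporal order."""
    ordered = sorted(events, key=lambda e: e[0])  # sort by timestamp only
    # assign each unseen code the next free id; emit ids in event order
    return [vocab.setdefault(code, len(vocab)) for _, code, _ in ordered]

vocab = {}
events = [
    (3.0, "LAB_50912", 1.2),      # illustrative lab event
    (1.0, "DX_I50.9", None),      # illustrative diagnosis event
    (2.0, "MED_FUROSEMIDE", 40.0) # illustrative medication event
]
tokens = tokenize_events(events, vocab)
# tokens are code ids in temporal order: diagnosis, medication, lab
```

Value features, which the paper finds matter for mortality but not readmission, would be attached as a separate per-token channel rather than folded into the code vocabulary.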

📝 Abstract
Tokenization strategies shape how models process electronic health records, yet fair comparisons of their effectiveness remain limited. We present a systematic evaluation of tokenization approaches for clinical time-series modeling using transformer-based architectures, revealing task-dependent and sometimes counterintuitive findings about the importance of temporal and value features. Through controlled ablations across four clinical prediction tasks on MIMIC-IV, we demonstrate that explicit time encodings provide no consistent, statistically significant benefit for the evaluated downstream tasks. Value features show task-dependent importance, affecting mortality prediction but not readmission, suggesting that code sequences alone can carry sufficient predictive signal. We further show that frozen pretrained code encoders dramatically outperform their trainable counterparts while requiring far fewer parameters. Larger clinical encoders provide consistent improvements across tasks, since frozen embeddings eliminate the overhead of encoder training. Our controlled evaluation enables fairer tokenization comparisons and demonstrates that simpler, parameter-efficient approaches can, in many cases, achieve strong performance, though the optimal tokenization strategy remains task-dependent.
Problem

Research questions and friction points this paper is trying to address.

Evaluates tokenization strategies for clinical time series modeling
Assesses the impact of time and value features on predictions
Compares parameter-efficient versus complex tokenization approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simpler tokenization strategies outperform complex time encodings
Frozen pretrained code encoders reduce parameters while improving performance
Task-dependent value features influence mortality but not readmission prediction
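The frozen-encoder result can be illustrated with a minimal sketch: pretrained code embeddings are held fixed and only a small classification head is trained, so the trainable parameter count collapses from the full embedding table to the head alone. Everything here (the random stand-in embeddings, mean pooling, the logistic head) is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

# Hypothetical sketch of a frozen pretrained code encoder. The embedding
# table stands in for pretrained clinical embeddings (e.g. ClinicalBERT-style
# code vectors) and receives no updates; only the head's weights are trained.

rng = np.random.default_rng(0)
vocab_size, dim = 100, 16

pretrained = rng.normal(size=(vocab_size, dim))  # frozen: never updated below

def encode(token_ids, table):
    """Mean-pool frozen code embeddings into one patient-level vector."""
    return table[token_ids].mean(axis=0)

w = np.zeros(dim)  # trainable head: dim parameters vs. vocab_size * dim frozen

def train_step(token_ids, label, lr=0.1):
    """One logistic-regression gradient step on the head only."""
    global w
    x = encode(token_ids, pretrained)   # no gradient flows into `pretrained`
    p = 1.0 / (1.0 + np.exp(-x @ w))    # predicted probability
    w -= lr * (p - label) * x           # update head weights only
    return p

p_before = train_step([1, 5, 7], 1.0)   # 0.5 with a zero-initialized head
```

Freezing the encoder this way is what enables the reported efficiency gains: embeddings can be precomputed once, and only the lightweight head participates in backpropagation.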
Rafi Al Attrach
Massachusetts Institute of Technology (MIT), USA; Technical University of Munich (TUM), Germany
Rajna Fani
TUM, MIT
David Restrepo
MICS, CentraleSupélec – Université Paris-Saclay, France
Yugang Jia
Massachusetts Institute of Technology (MIT), USA
Peter Schüffler
Institute of Pathology, Technical University of Munich, Germany; Munich Center for Machine Learning (MCML), Germany