🤖 AI Summary
This study investigates how missing-data patterns affect the zero-shot medical prediction performance of large language models (LLMs). Using MIMIC-IV and Columbia University Medical Center data, we conduct systematic zero-shot prompting experiments, compare missingness encoding strategies (e.g., [MASK] tokens vs. explicit natural-language descriptions), and perform model calibration analysis. Results reveal heterogeneous effects of explicit missingness representation across model scales: contrary to conventional assumptions, larger LLMs benefit while smaller ones degrade. Standard evaluation metrics obscure missingness-induced biases, leading to miscalibration. Our key contributions are threefold: (1) the first empirical identification of scale-dependent LLM responses to informative missingness; (2) a novel aggregated analysis framework integrating prompt engineering, calibration diagnostics, and ablation studies; and (3) a theoretical interpretation grounded in parameter-count and capacity trade-offs. We emphasize that transparent modeling of missing-data mechanisms is essential for reliable clinical deployment of LLMs.
📝 Abstract
Data collection often reflects human decisions. In healthcare, for instance, a referral for a diagnostic test is influenced by the patient's health, their preferences, available resources, and the practitioner's recommendations. Despite the extensive literature on the informativeness of missingness, its implications for the performance of Large Language Models (LLMs) have not been studied. Through a series of experiments on data from Columbia University Medical Center, a large urban academic medical center, and MIMIC-IV, we demonstrate that patterns of missingness significantly impact zero-shot predictive performance. Notably, the explicit inclusion of missingness indicators in prompts benefits some LLMs' zero-shot predictive performance and calibration while hurting others', suggesting an inconsistent impact. The proposed aggregated analysis and theoretical insights suggest that larger models benefit from these interventions, while smaller models can be negatively impacted. The LLM paradigm risks further obscuring the impact of missingness, which is often neglected even in conventional ML. We conclude that there is a need for more transparent accounting and systematic evaluation of how representing (informative) missingness affects downstream performance.
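To make the compared encoding strategies concrete, the following is a minimal sketch of how missing clinical values could be rendered in a zero-shot prompt. The feature names, prompt wording, and strategy labels are illustrative assumptions, not the paper's exact templates.

```python
def encode_features(features: dict, strategy: str) -> str:
    """Render patient features as prompt lines, handling missing values (None).

    strategy:
      "mask"     -> stand in for missing values with a [MASK] placeholder token
      "explicit" -> describe missingness in natural language
      "drop"     -> omit missing features from the prompt entirely
    """
    lines = []
    for name, value in features.items():
        if value is not None:
            lines.append(f"{name}: {value}")
        elif strategy == "mask":
            lines.append(f"{name}: [MASK]")
        elif strategy == "explicit":
            lines.append(f"{name}: not measured for this patient")
        # "drop": skip the feature, leaving missingness implicit
    return "\n".join(lines)

# Hypothetical patient record; Lactate was never ordered.
patient = {"Age": 67, "Lactate": None, "Creatinine": 1.4}
print(encode_features(patient, "mask"))
print(encode_features(patient, "explicit"))
```

Under this framing, the "drop" strategy hides the fact that a test was never ordered, whereas "mask" and "explicit" surface it to the model in different forms, which is precisely the axis along which the paper reports scale-dependent effects.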