Mind the Data Gap: Missingness Still Shapes Large Language Model Prognoses

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how missing-data patterns affect the zero-shot medical prediction performance of large language models (LLMs). Using MIMIC-IV and Columbia University Medical Center data, we conduct systematic zero-shot prompting experiments, compare missingness encoding strategies (e.g., [MASK] tokens vs. explicit natural-language descriptions), and perform model calibration analysis. Results reveal heterogeneous effects of explicit missingness representation across model scales: larger LLMs benefit, while smaller ones degrade—contrary to conventional assumptions. Standard evaluation metrics obscure missingness-induced biases, leading to miscalibration. Our key contributions are threefold: (1) the first empirical identification of scale-dependent LLM responses to informative missingness; (2) a novel aggregation analysis framework integrating prompt engineering, calibration diagnostics, and ablation studies; and (3) a theoretical interpretation grounded in parameter-count–capacity trade-offs. We emphasize that transparent modeling of missing-data mechanisms is essential for reliable clinical deployment of LLMs.
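The two missingness encoding strategies mentioned above can be illustrated with a minimal sketch. The feature names, values, and the `encode_features` helper are hypothetical illustrations, not taken from the paper's actual prompts or data:

```python
# Hypothetical sketch of two ways to represent a missing feature in a prompt:
# a [MASK] placeholder token vs. an explicit natural-language description.
# Feature names/values below are illustrative only.

def encode_features(features: dict, strategy: str) -> str:
    """Render patient features as prompt text; missing values (None) are
    encoded per the chosen strategy ('mask' or 'explicit')."""
    lines = []
    for name, value in features.items():
        if value is not None:
            lines.append(f"{name}: {value}")
        elif strategy == "mask":
            lines.append(f"{name}: [MASK]")
        else:  # explicit natural-language statement of missingness
            lines.append(f"{name} was not measured for this patient.")
    return "\n".join(lines)

patient = {"heart rate": 92, "lactate": None, "creatinine": 1.4}
print(encode_features(patient, "mask"))
print(encode_features(patient, "explicit"))
```

Note the explicit variant makes the missingness itself legible to the model, which per the study's findings helps larger models but can hurt smaller ones.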

📝 Abstract
Data collection often reflects human decisions. In healthcare, for instance, a referral for a diagnostic test is influenced by the patient's health, their preferences, available resources, and the practitioner's recommendations. Despite the extensive literature on the informativeness of missingness, its implications for the performance of Large Language Models (LLMs) have not been studied. Through a series of experiments on data from Columbia University Medical Center, a large urban academic medical center, and MIMIC-IV, we demonstrate that patterns of missingness significantly impact zero-shot predictive performance. Notably, explicitly including missingness indicators in the prompt improves the zero-shot predictive performance and calibration of some LLMs while hurting others, an inconsistent impact. Our aggregated analysis and theoretical insights suggest that larger models benefit from these interventions, while smaller models can be negatively impacted. The LLM paradigm risks further obscuring the impact of missingness, which is often neglected even in conventional ML. We conclude that more transparent accounting and systematic evaluation of how (informative) missingness is represented are needed to understand its effect on downstream performance.
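The calibration analysis referenced above can be sketched with a standard binned expected calibration error (ECE). This is a generic diagnostic, not the paper's exact procedure; the bin count and the convention of treating scores as P(y=1) are assumptions:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: the bin-size-weighted average of |accuracy - confidence|
    over equal-width confidence bins. `probs` are predicted P(y=1)."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # first bin is closed on the left; others are half-open (lo, hi]
        in_bin = ((probs > lo) if lo > 0 else (probs >= lo)) & (probs <= hi)
        if not in_bin.any():
            continue
        confidence = probs[in_bin].mean()
        accuracy = (labels[in_bin] == 1).mean()  # empirical positive rate
        ece += in_bin.mean() * abs(accuracy - confidence)
    return ece
```

Comparing ECE with and without missingness indicators in the prompt is one way to surface the miscalibration that aggregate accuracy metrics can hide.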
Problem

Research questions and friction points this paper is trying to address.

Studies missing data's effect on LLM predictions
Examines missingness indicators' inconsistent impact on models
Advocates for transparent evaluation of missingness representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Missingness indicators improve large models' zero-shot prediction
Aggregated analysis shows inconsistent impact across model sizes
Transparent accounting needed for missingness representation effects
Yuta Kobayashi
Department of Biomedical Informatics, Columbia University, New York
Vincent Jeanselme
Department of Biomedical Informatics, Columbia University, New York
Shalmali Joshi
Columbia University
Artificial Intelligence · Machine Learning · Biomedical Sciences · Clinical Informatics