Do we Need Dozens of Methods for Real World Missing Value Imputation?

📅 2025-11-06
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Real-world missing-value imputation requires an evaluation framework that goes beyond point-estimate metrics such as RMSE. This paper reframes imputation as a distributional prediction task and evaluates methods with *Imputation Scores*, a metric quantifying how well imputed values preserve the underlying data distribution. It systematically benchmarks more than a dozen imputation methods not only under synthetic MCAR and MAR settings but, for the first time, under real-world missingness scenarios. Experiments span numerical and mixed-type datasets and include the widely used iterative multiple-imputation methods implemented in the *mice* R package, with unified benchmarking across synthetic and real-world missing-data scenarios. The results show that iterative methods, particularly those in *mice*, outperform single-imputation and deep learning approaches in distributional fidelity and robustness, giving practitioners reproducible, interpretable criteria for method selection.
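As a concrete illustration of the workflow summarized above, here is a minimal sketch of iterative multiple imputation with the *mice* R package, using a synthetic MAR pattern generated by `mice::ampute()`. The dataset (`airquality`), seed, and parameter choices are illustrative assumptions, not the paper's benchmark configuration.

```r
# Minimal sketch: iterative multiple imputation with the mice R package.
# Dataset, seed, and parameters are illustrative assumptions.
library(mice)

set.seed(1)
full_data <- na.omit(airquality)      # fully observed reference data

# Inject ~30% synthetic MAR missingness with mice::ampute()
amputed  <- ampute(full_data, prop = 0.3, mech = "MAR")
data_mis <- amputed$amp

# Chained-equations imputation: m = 5 completed data sets
imp <- mice(data_mis, m = 5, method = "pmm", printFlag = FALSE)

# Inspect the first completed data set
head(complete(imp, 1))
```

With m = 5 completed data sets, downstream analyses can be run on each copy and pooled, which is the multiple-imputation workflow contrasted above with single-imputation and deep learning approaches.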

📝 Abstract
Missing values pose a persistent challenge in modern data science. Consequently, there is an ever-growing number of publications introducing new imputation methods in various fields. While many studies compare imputation approaches, they often focus on a limited subset of algorithms and evaluate performance primarily through pointwise metrics such as RMSE, which are not suited to measuring the preservation of the true data distribution. In this work, we provide a systematic benchmarking method based on the idea of treating imputation as a distributional prediction task. We consider a large number of algorithms and, for the first time, evaluate them not only on synthetic missingness mechanisms, but also on real-world missingness scenarios, using the concept of Imputation Scores. Finally, while previous benchmarks have often focused on numerical data, we also consider mixed data sets in our study. The analysis overwhelmingly confirms the superiority of iterative imputation algorithms, especially the methods implemented in the mice R package.
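To make the abstract's point about pointwise metrics concrete, the sketch below compares conditional-mean imputation with *mice*'s predictive mean matching under both RMSE and a simple one-dimensional energy distance. The `energy_distance()` helper is an illustrative stand-in for distributional scoring, not the paper's Imputation Score, and the simulated data and parameters are assumptions.

```r
# Sketch: why pointwise RMSE can favor imputations that distort the data
# distribution. energy_distance() is a simple stand-in for a distributional
# score; it is NOT the paper's Imputation Score.
library(mice)

set.seed(2)
n <- 500
x <- rnorm(n)
y <- x + rnorm(n)                       # y depends on x
data <- data.frame(x = x, y = y)

# MCAR: delete 40% of y at random
miss <- sample(n, 0.4 * n)
data_mis <- data
data_mis$y[miss] <- NA

# Candidate 1: conditional-mean imputation (regression point prediction)
fit <- lm(y ~ x, data = data_mis)
y_mean <- predict(fit, newdata = data_mis[miss, ])

# Candidate 2: predictive mean matching via mice (draws, not point predictions)
imp <- mice(data_mis, m = 1, method = "pmm", printFlag = FALSE)
y_pmm <- complete(imp, 1)$y[miss]

rmse <- function(a, b) sqrt(mean((a - b)^2))

# One-dimensional energy distance between two samples:
# 2 E|X - Y| - E|X - X'| - E|Y - Y'|
energy_distance <- function(a, b) {
  2 * mean(abs(outer(a, b, "-"))) -
    mean(abs(outer(a, a, "-"))) -
    mean(abs(outer(b, b, "-")))
}

y_true <- data$y[miss]
# Conditional-mean imputation typically wins on RMSE...
c(rmse_mean = rmse(y_mean, y_true), rmse_pmm = rmse(y_pmm, y_true))
# ...but it shrinks the variance, so it typically loses on the
# distributional score, which favors the pmm draws.
c(ed_mean = energy_distance(y_mean, y_true),
  ed_pmm  = energy_distance(y_pmm, y_true))
```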
Problem

Research questions and friction points this paper is trying to address.

Evaluating numerous imputation methods for real-world missing data scenarios
Assessing algorithm performance through distributional prediction rather than pointwise metrics
Benchmarking imputation approaches on mixed datasets beyond just numerical data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic benchmarking treating imputation as distributional prediction
Evaluating algorithms using Imputation Scores on real-world scenarios
Confirming the superiority of iterative imputation methods like mice (a toy comparison is sketched below)
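In the same spirit, a hypothetical mini-benchmark might loop over several *mice* methods and rank them by a crude distributional statistic. The method names are real *mice* options, but the variance-ratio score is an illustrative stand-in for the paper's Imputation Scores, and the data setup is an assumption.

```r
# Hypothetical mini-benchmark: rank several mice methods by how well the
# imputed column preserves the spread of the true data. The variance-ratio
# score is an illustrative stand-in, not the paper's Imputation Score.
library(mice)

set.seed(3)
data <- na.omit(airquality)
data_mis <- ampute(data, prop = 0.3, mech = "MAR")$amp

methods <- c("pmm", "norm", "cart", "mean")
scores <- sapply(methods, function(m) {
  imp  <- mice(data_mis, m = 1, method = m, printFlag = FALSE)
  comp <- complete(imp, 1)
  # ratio of imputed-data variance to true-data variance for Ozone:
  # closer to 1 suggests the spread is better preserved
  var(comp$Ozone) / var(data$Ozone)
})
sort(abs(scores - 1))   # smaller = spread better preserved
```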
Krystyna Grzesiak
Faculty of Mathematics and Computer Science, University of Wrocław
Christophe Muller
INRIA
statistics, causality, missing values
Julie Josse
Senior Researcher, Inria
Missing values, Low rank matrix, causal inference, R
Jeffrey Näf
Research Institute for Statistics and Information Science, University of Geneva