When Pattern-by-Pattern Works: Theoretical and Empirical Insights for Logistic Models with Missing Values

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of response prediction in logistic regression under missing data mechanisms—Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR). We propose and systematically analyze the Pattern-by-Pattern (PbP) prediction strategy, which conditions predictions on observed-data patterns. We theoretically establish that PbP consistently approximates the Bayes-optimal prediction probability under all three missingness mechanisms, thereby providing a statistically valid, mechanism-agnostic paradigm for missing-data prediction. Empirical evaluation against standard baselines—including complete-case analysis, mean imputation, EM-based estimation, and multiple imputation via MICE with random forests (MICE.RF.Y)—reveals nuanced performance trade-offs: mean imputation exhibits robustness in small samples; PbP achieves superior accuracy in large samples with approximately Gaussian-mixture covariates; and MICE.RF.Y excels when strong nonlinear feature dependencies exist. The work bridges theoretical rigor with practical applicability in missing-data inference.

📝 Abstract
Predicting a response with partially missing inputs remains a challenging task even in parametric models, since parameter estimation by itself is not sufficient to predict on partially observed inputs. Several works study prediction in linear models. In this paper, we focus on logistic models, which present their own difficulties. From a theoretical perspective, we prove that a Pattern-by-Pattern strategy (PbP), which learns one logistic model per missingness pattern, accurately approximates Bayes probabilities in various missing data scenarios (MCAR, MAR and MNAR). Empirically, we thoroughly compare various methods (constant and iterative imputations, complete case analysis, PbP, and an EM algorithm) across classification, probability estimation, calibration, and parameter inference. Our analysis provides a comprehensive view of logistic regression with missing values. It reveals that mean imputation can be used as a baseline for low sample sizes, and improved performance is obtained via nonlinear multiple iterative imputation techniques with the labels (MICE.RF.Y). For large sample sizes, PbP is the best method for Gaussian mixtures, and we recommend MICE.RF.Y in the presence of nonlinear features.
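The Pattern-by-Pattern idea from the abstract, fitting one logistic model per missingness pattern and routing each test point to the model for its pattern, can be sketched as follows. This is a minimal numpy-only illustration, not the paper's implementation; the class and function names are hypothetical, and it assumes every test pattern was seen during training.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=500):
    """Plain gradient-descent logistic regression (illustrative, no regularization)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)      # average log-loss gradient step
    return w

def sigmoid_predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

class PatternByPattern:
    """Hypothetical PbP sketch: one logistic model per missingness pattern,
    each trained only on that pattern's rows and observed columns."""

    def fit(self, X, y):
        self.models = {}
        masks = np.isnan(X)
        for pattern in {tuple(m) for m in masks}:
            rows = (masks == np.array(pattern)).all(axis=1)
            obs = ~np.array(pattern)           # observed columns for this pattern
            self.models[pattern] = fit_logistic(X[rows][:, obs], y[rows])
        return self

    def predict_proba(self, X):
        out = np.empty(len(X))
        masks = np.isnan(X)
        for i, m in enumerate(masks):
            w = self.models[tuple(m)]          # assumes pattern seen at fit time
            out[i] = sigmoid_predict(w, X[i, ~m][None, :])[0]
        return out
```

Because each sub-model conditions only on the coordinates observed under its pattern, no imputation is needed at train or test time, which is what lets PbP target the pattern-specific Bayes probability under MCAR, MAR, and MNAR alike.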
Problem

Research questions and friction points this paper is trying to address.

Predicting responses with partially missing inputs in logistic models
Evaluating Pattern-by-Pattern strategy for accurate Bayes approximation
Comparing imputation methods for classification and parameter inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pattern-by-Pattern strategy for missing data
Nonlinear multiple iterative imputation techniques
Comparison of methods for logistic regression