🤖 AI Summary
This work addresses missing data in real-world tabular datasets containing mixed continuous and categorical variables by proposing a Statistical-Neural Interaction (SNI) framework that jointly targets imputation accuracy and model interpretability. SNI introduces a Controllable Prior Feature Attention (CPFA) module that dynamically integrates statistical correlation priors with neural attention, endogenously producing a feature-dependency matrix that reveals inter-variable relationships without post-hoc explanation. A mask-aware variant improves robustness under Missing Not At Random (MNAR) conditions. Across six real-world datasets, SNI is generally competitive on continuous-variable imputation; on categorical variables it often trails accuracy-first methods such as MissForest and MIWAE, but in exchange it provides interpretable dependency diagnostics and a transparent trade-off mechanism between its statistical and neural components.
📝 Abstract
Real-world tabular databases routinely combine continuous measurements and categorical records, yet missing entries are pervasive and can distort downstream analysis. We propose Statistical-Neural Interaction (SNI), an interpretable mixed-type imputation framework that couples correlation-derived statistical priors with neural feature attention through a Controllable Prior Feature Attention (CPFA) module. CPFA learns head-wise prior-strength coefficients $\{\lambda_h\}$ that softly regularize attention toward the prior while permitting data-driven deviations when nonlinear patterns are present. Beyond imputation, SNI aggregates attention maps into a directed feature-dependency matrix that summarizes which variables the imputer relied on, without requiring post-hoc explainers. We evaluate SNI against six baselines (Mean/Mode, MICE, KNN, MissForest, GAIN, MIWAE) on six datasets spanning ICU monitoring, population surveys, socio-economic statistics, and engineering applications. Under MCAR and strict-MAR at 30\% missingness, SNI is generally competitive on continuous metrics but is often outperformed by accuracy-first baselines (MissForest, MIWAE) on categorical variables; in return, it provides intrinsic dependency diagnostics and explicit statistical-neural trade-off parameters. We additionally report MNAR stress tests with a mask-aware variant and discuss computational cost, limitations (particularly for severely imbalanced categorical targets), and deployment scenarios where interpretability may justify the trade-off.
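The abstract does not give the exact form of the soft prior regularization or of the dependency-matrix aggregation, so the following is a minimal sketch under two assumptions: each head blends its attention weights with the row-stochastic correlation prior via a convex combination controlled by $\lambda_h$, and the dependency matrix is the average of the per-head attention maps. All function names are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x):
    """Row-wise softmax (numerically stabilized)."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cpfa_attention(Q, K, prior, lam):
    """One CPFA head (hypothetical form): convex blend of neural
    attention and a correlation-derived prior.

    Q, K    : (n_features, d) query/key projections
    prior   : (n_features, n_features) row-stochastic statistical prior
    lam     : scalar in [0, 1], this head's learned prior-strength lambda_h
    """
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))   # purely neural attention weights
    return (1.0 - lam) * attn + lam * prior  # lam=1 recovers the prior exactly

def dependency_matrix(head_attns):
    """Aggregate per-head attention maps into one directed
    feature-dependency matrix (here: a simple mean over heads)."""
    return np.mean(np.stack(head_attns), axis=0)

# Toy usage: 4 features, 8-dim projections, two heads with different lambdas.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
prior = softmax(rng.normal(size=(4, 4)))         # stand-in correlation prior
heads = [cpfa_attention(Q, K, prior, lam) for lam in (0.3, 0.7)]
D = dependency_matrix(heads)                     # rows: which features each imputation drew on
```

Because both the attention weights and the prior are row-stochastic, the blend and the aggregated matrix remain row-stochastic, which makes each row of `D` directly readable as a distribution over source features.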