🤖 AI Summary
Clinical machine learning models often encounter missing features at inference time; conventional imputation or deletion strategies conflict with clinicians’ reliance on observed data and clinical intuition, undermining model trustworthiness and interpretability.
Method: Drawing on an empirical study of 55 clinicians from 29 French trauma centers (20 complete survey responses), combining surveys with analysis of real-world decision-making behavior, we systematically characterize expert preferences for handling missing values. We then propose "native missing-value support" as a core human-centered design principle for interpretable machine learning (IML), and comparatively evaluate three inherently missing-aware models: decision trees, rule sets, and prototype networks.
Contribution/Results: Clinicians prefer models that natively handle missing values; conventional imputation often conflicts with their intuition and can undermine trust in explanations. This work proposes human-centered design guidelines for deploying IML systems in clinical practice.
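To make "native missing-value support" concrete, here is a minimal sketch (not taken from the paper) of a decision-tree split that routes missing inputs directly: each internal node stores a default branch for unobserved values, so prediction uses only observed features plus learned routing, with no imputation step. The feature name, threshold, and risk labels are hypothetical illustrations.

```python
# Sketch of a missing-aware decision tree: each split stores a
# default branch that NaN inputs follow, instead of imputing a value.
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class Node:
    feature: Optional[str] = None      # None => leaf node
    threshold: float = 0.0
    missing_goes_left: bool = True     # learned default branch for NaN
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    prediction: Optional[str] = None   # set at leaves

def predict(node: Node, x: dict) -> str:
    """Route a sample down the tree; missing values follow the node's default branch."""
    while node.feature is not None:
        v = x.get(node.feature, float("nan"))
        if math.isnan(v):
            go_left = node.missing_goes_left
        else:
            go_left = v <= node.threshold
        node = node.left if go_left else node.right
    return node.prediction

# Hypothetical hemorrhagic-shock style rule: low systolic blood pressure
# means high risk, and an unmeasured reading is routed to the cautious branch.
tree = Node(
    feature="systolic_bp", threshold=90.0, missing_goes_left=True,
    left=Node(prediction="high_risk"),
    right=Node(prediction="low_risk"),
)

print(predict(tree, {"systolic_bp": 120.0}))  # low_risk
print(predict(tree, {}))                      # high_risk: no value invented
```

The explanation for a missing input ("no reading available, routed to the high-risk branch") matches how a clinician reasons about unobserved data, unlike an explanation built on an imputed number.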
📝 Abstract
Inherently interpretable machine learning (IML) models offer valuable support for clinical decision-making but face challenges when features contain missing values. Traditional approaches, such as imputation or discarding incomplete records, are often impractical in scenarios where data is missing at test time. We surveyed 55 clinicians from 29 French trauma centers, collecting 20 complete responses to study their interaction with three IML models in a real-world clinical setting for predicting hemorrhagic shock with missing values. Our findings reveal that while clinicians recognize the value of interpretability and are familiar with common IML approaches, traditional imputation techniques often conflict with their intuition. Instead of imputing unobserved values, they rely on observed features combined with medical intuition and experience. As a result, methods that natively handle missing values are preferred. These findings underscore the need to integrate clinical reasoning into future IML models to enhance human-computer interaction.
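The conflict between imputation and clinical intuition described above can be illustrated with a minimal sketch (not the paper's pipeline): mean imputation replaces an unmeasured vital sign with the population average, so any downstream rule treats the patient as "average" on the basis of a value nobody observed. The feature name and readings are hypothetical.

```python
# Sketch of why mean imputation can conflict with clinical intuition:
# a missing measurement is silently replaced by the cohort mean.
import math

def mean_impute(rows: list, feature: str) -> list:
    """Fill NaN entries of `feature` with the mean of the observed values."""
    observed = [r[feature] for r in rows if not math.isnan(r[feature])]
    mean = sum(observed) / len(observed)
    return [
        {**r, feature: mean if math.isnan(r[feature]) else r[feature]}
        for r in rows
    ]

# Hypothetical systolic blood pressure readings; patient 3 was not measured.
patients = [
    {"id": 1, "systolic_bp": 80.0},
    {"id": 2, "systolic_bp": 140.0},
    {"id": 3, "systolic_bp": float("nan")},  # not measured at the scene
]

imputed = mean_impute(patients, "systolic_bp")
print(imputed[2]["systolic_bp"])  # 110.0 — a "normal" value was invented
# A downstream rule like "systolic_bp <= 90 => high risk" now confidently
# labels patient 3 low-risk, and any explanation cites the fabricated value.
```

This mismatch between what the model explains and what was actually observed is exactly the trust problem the surveyed clinicians reported.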