Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing feature importance methods (e.g., SHAP, LIME) lack operational grounding in high-stakes decision-making—such as credit lending—rendering them inadequate for guiding actionable user interventions to improve predictions. Method: We introduce *feature responsiveness*: a model-agnostic measure quantifying the probability that varying a single feature alone drives the prediction to a desired outcome (e.g., loan approval). Our framework integrates counterfactual search with Monte Carlo estimation, requires no gradients or model retraining, and automatically identifies non-appealable cases where predictions are invariant to all feasible interventions. Contribution/Results: Evaluated on real-world credit data, responsiveness-based explanations increase users’ success rate in altering predictions by 3.2× over SHAP/LIME, while precisely identifying 27% of instances with no recourse—thereby strengthening algorithmic accountability and consumer protection.

Technology Category

Application Category

📝 Abstract
Machine learning models routinely automate decisions in applications like lending and hiring. In such settings, consumer protection rules require companies that deploy models to explain predictions to decision subjects. These rules are motivated, in part, by the belief that explanations can promote recourse by revealing information that individuals can use to contest or improve their outcomes. In practice, many companies comply with these rules by providing individuals with a list of the most important features for their prediction, which they identify based on feature importance scores from feature attribution methods such as SHAP or LIME. In this work, we show how these practices can undermine consumers by highlighting features that would not lead to an improved outcome and by explaining predictions that cannot be changed. We propose to address these issues by highlighting features based on their responsiveness score -- i.e., the probability that an individual can attain a target prediction by changing a specific feature. We develop efficient methods to compute responsiveness scores for any model and any dataset. We conduct an extensive empirical study on the responsiveness of explanations in lending. Our results show that standard practices in consumer finance can backfire by presenting consumers with reasons without recourse, and demonstrate how our approach improves consumer protection by highlighting responsive features and identifying fixed predictions.
Problem

Research questions and friction points this paper is trying to address.

Explaining ML predictions for consumer recourse in lending/hiring
Current feature importance methods may mislead consumers
Proposing responsiveness scores to identify actionable feature changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic feature responsiveness scores
Efficient computation for any model
Improves consumer protection in lending
🔎 Similar Papers
No similar papers found.