Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users

📅 2025-05-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
The Rashomon effect in interpretable machine learning, where multiple models achieve comparable predictive performance yet explain the underlying relationships in substantially different ways, raises the question of which of the equally accurate models to present to which user. Method: The paper proposes a framework that dynamically configures Generalized Additive Models (GAMs) to match individual users' interpretability needs. It applies contextual multi-armed bandits to interpretable model selection, yielding an online, feedback-driven personalization system that adaptively selects GAM structures while preserving both predictive accuracy and intelligibility. Contribution/Results: In an online user study with 108 participants, the personalized group ended up with diverse, individualized GAM configurations rather than a single one-size-fits-all model, while both the personalized and baseline groups reported a strong subjective understanding of the models. These results provide initial evidence that personalization does not have to come at the cost of interpretability, supporting user-adaptive rather than one-size-fits-all explanation strategies.
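
The summary does not come with code, but the Rashomon effect it describes is easy to reproduce. The sketch below is illustrative only: the synthetic data, configuration names, and the use of scikit-learn's SplineTransformer plus Ridge as a stand-in for a GAM are all assumptions, not the paper's setup. It fits three additive models of different flexibility to the same data; they reach similar test accuracy while implying different per-feature shape functions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

# Synthetic additive data: y depends smoothly on two features.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate GAM configurations: each varies how flexible the per-feature
# shape functions are (spline knots / degree). All remain additive models.
configs = {
    "coarse": SplineTransformer(n_knots=4, degree=2),
    "medium": SplineTransformer(n_knots=8, degree=3),
    "fine": SplineTransformer(n_knots=16, degree=3),
}
for name, splines in configs.items():
    gam_like = make_pipeline(splines, Ridge(alpha=1.0)).fit(X_tr, y_tr)
    # Similar scores despite different shape functions: the Rashomon effect.
    print(f"{name}: test R^2 = {gam_like.score(X_te, y_te):.3f}")
```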

📝 Abstract
The Rashomon effect describes the observation that in machine learning (ML) multiple models often achieve similar predictive performance while explaining the underlying relationships in different ways. This observation holds even for intrinsically interpretable models, such as Generalized Additive Models (GAMs), which offer users valuable insights into the model's behavior. Given the existence of multiple GAM configurations with similar predictive performance, a natural question is whether we can personalize these configurations based on users' needs for interpretability. In our study, we developed an approach to personalize models based on contextual bandits. In an online experiment with 108 users in a personalized treatment and a non-personalized control group, we found that personalization led to individualized rather than one-size-fits-all configurations. Despite these individual adjustments, the interpretability remained high across both groups, with users reporting a strong understanding of the models. Our research offers initial insights into the potential for personalizing interpretable ML.
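
The abstract names contextual bandits as the personalization mechanism but does not pin down the algorithm. Below is a minimal sketch assuming an epsilon-greedy contextual bandit: each arm is a GAM configuration, the context is a coarse user feature, and `simulated_feedback` is a hypothetical stand-in for the interpretability ratings a real user would provide. The paper's exact bandit variant and reward design may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
ARMS = ["few_terms", "many_terms", "with_interactions"]  # toy GAM configurations
CONTEXTS = ["novice", "expert"]                          # toy user feature
EPS = 0.1                                                # exploration rate

# Running mean reward per (user context, configuration) pair.
counts = {(c, a): 0 for c in CONTEXTS for a in ARMS}
values = {(c, a): 0.0 for c in CONTEXTS for a in ARMS}

def simulated_feedback(context, arm):
    """Hypothetical stand-in for a user's interpretability rating in [0, 1]."""
    preference = {("novice", "few_terms"): 0.9, ("expert", "many_terms"): 0.8}
    return float(np.clip(preference.get((context, arm), 0.5) + rng.normal(0, 0.1), 0, 1))

for _ in range(2000):
    context = rng.choice(CONTEXTS)
    if rng.random() < EPS:                                  # explore
        arm = rng.choice(ARMS)
    else:                                                   # exploit best estimate
        arm = max(ARMS, key=lambda a: values[(context, a)])
    reward = simulated_feedback(context, arm)
    counts[(context, arm)] += 1
    values[(context, arm)] += (reward - values[(context, arm)]) / counts[(context, arm)]

for c in CONTEXTS:
    best = max(ARMS, key=lambda a: values[(c, a)])
    print(f"{c}: learned preferred configuration -> {best}")
```

In the study itself, the reward signal would come from user interactions rather than a hard-coded preference table, and the context vector could carry richer user features.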
Problem

Research questions and friction points this paper is trying to address.

Personalizing interpretable ML models for individual user needs
Addressing Rashomon effect via tailored model configurations
Balancing predictive performance and interpretability in GAMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalizing interpretable models using contextual bandits
Adjusting GAM configurations based on user needs (one possible configuration schema is sketched after this list)
Maintaining high interpretability with individualized models
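
To make "GAM configuration" concrete as a bandit arm, here is one possible schema, again under the scikit-learn stand-in used above. The paper does not publish its configuration space (which terms, interactions, and smoothness levels were exposed), so the fields and values here are assumptions.

```python
from dataclasses import dataclass

from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

@dataclass(frozen=True)
class GAMConfig:
    """One bandit arm: the knobs that shape a GAM's interpretability (assumed schema)."""
    n_knots: int    # flexibility of each per-feature shape function
    degree: int     # spline degree
    alpha: float    # regularization; stronger = simpler, flatter curves

def build_gam(config: GAMConfig):
    """Instantiate a GAM-like pipeline from a configuration."""
    return make_pipeline(
        SplineTransformer(n_knots=config.n_knots, degree=config.degree),
        Ridge(alpha=config.alpha),
    )

# A small arm set trading visual detail against simplicity.
ARMS = [GAMConfig(4, 2, 10.0), GAMConfig(8, 3, 1.0), GAMConfig(16, 3, 0.1)]
```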