🤖 AI Summary
This study examines whether algorithmic transparency—explaining a model's decision logic—can substitute for user adjustment of imperfect ML predictions in mitigating algorithm aversion. Method: A preregistered experiment (N = 280) integrates visual explainability and prediction adjustability under controlled conditions to assess their independent and interactive effects. Contribution/Results: Adjustability significantly reduces algorithm aversion, replicating prior findings; transparency alone shows no statistically significant effect; and no synergistic interaction emerges—indicating independent underlying mechanisms. These results challenge the prevalent assumption that transparency can substitute for human intervention. The findings provide empirical guidance for algorithm design: enhancing user acceptance requires *both* adjustability and transparency as complementary, not substitutable, features.
📝 Abstract
Previous work has shown that allowing users to adjust a machine learning (ML) model's predictions can reduce aversion to imperfect algorithmic decisions. However, these results were obtained in settings where users had no information about the model's reasoning. It thus remains unclear whether interpretable ML models could further reduce algorithm aversion or even render adjustability obsolete. In this paper, we conceptually replicate a well-known study on the effect of adjustable predictions on algorithm aversion and extend it by introducing an interpretable ML model that visually reveals its decision logic. Through a preregistered user study with 280 participants, we investigate how transparency interacts with adjustability in reducing aversion to algorithmic decision-making. Our results replicate the adjustability effect, showing that allowing users to modify algorithmic predictions mitigates aversion. Transparency's impact is smaller than expected and not statistically significant in our sample. Furthermore, the effects of transparency and adjustability appear to be more independent than expected.