🤖 AI Summary
Decision-makers frequently exhibit overreliance on algorithmic recommendations, and conventional eXplainable AI (XAI) methods often fail to mitigate—sometimes even exacerbate—this dependency.
Method: This study introduces a reflective questioning taxonomy for human-AI collaborative decision-making, the first systematic framework for structured, critical-reflection-oriented questioning. It integrates Socratic questioning with human-centred XAI, shifting XAI's paradigm from "explanation generation" to "reflection scaffolding" and thereby aligning with the EU AI Act's requirement of meaningful human oversight.
Contribution/Results: Illustrated through a clinical decision-making use case and evaluated empirically in an educational setting, the taxonomy is shown to be applicable and to support deeper reflection and decisional autonomy. It offers a deployable cognitive support tool for high-stakes domains, advancing both the theoretical foundations and the practical implementation of responsible AI-assisted decision-making.
📝 Abstract
Decision-makers run the risk of over-relying on machine recommendations, which is associated with lower cognitive engagement. Reflection has been shown to increase cognitive engagement and to improve critical thinking and reasoning, and thereby decision-making. However, no approach currently exists to support reflection in machine-assisted decision-making. We therefore present a taxonomy for systematically creating questions about machine-assisted decision-making that promote reflection, and thus cognitive engagement and, ultimately, a deliberate decision-making process. Our taxonomy builds on a taxonomy of Socratic questions and a question bank for human-centred explainable AI (XAI), and illustrates how XAI techniques can be used and repurposed to formulate questions. As a use case, we focus on clinical decision-making. An evaluation in an educational setting confirms the applicability and expected benefits of our taxonomy. Our work contributes to the growing body of research on human-AI interaction that moves beyond the paradigm of machine recommendations and explanations and aims to enable effective human oversight as required by the European AI Act.