A Taxonomy of Questions for Critical Reflection in Machine-Assisted Decision-Making

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Decision-makers frequently over-rely on algorithmic recommendations, and conventional explainable AI (XAI) methods often fail to mitigate, and can even exacerbate, this dependency. Method: The study introduces a taxonomy of reflective questions for human-AI collaborative decision-making, a systematic framework for structured questioning oriented toward critical reflection. It integrates Socratic questioning with human-centered XAI, shifting the paradigm from explanation generation to reflection scaffolding, in line with the EU AI Act's requirement for meaningful human oversight. Contribution/Results: Illustrated through clinical decision-making scenarios and evaluated empirically in an educational setting, the taxonomy supports learners' depth of reflection and decisional autonomy. It offers a deployable cognitive-support tool for high-stakes domains, advancing both the theory and practice of responsible AI-assisted decision-making.

📝 Abstract
Decision-makers run the risk of relying too much on machine recommendations, which is associated with lower cognitive engagement. Reflection has been shown to increase cognitive engagement and improve critical thinking and reasoning and therefore decision-making. However, there is currently no approach to support reflection in machine-assisted decision-making. We therefore present a taxonomy that serves to systematically create questions related to machine-assisted decision-making that promote reflection and thus cognitive engagement and ultimately a deliberate decision-making process. Our taxonomy builds on a taxonomy of Socratic questions and a question bank for human-centred explainable AI (XAI), and illustrates how XAI techniques can be utilised and repurposed to formulate questions. As a use case, we focus on clinical decision-making. An evaluation in education confirms the applicability and expected benefits of our taxonomy. Our work contributes to the growing research on human-AI interaction that goes beyond the paradigm of machine recommendations and explanations and aims to enable effective human oversight as required by the European AI Act.
Problem

Research questions and friction points this paper is trying to address.

Mitigate overreliance on machine recommendations in decision-making
Support reflection to increase cognitive engagement and critical thinking
Create a question taxonomy for human-centered AI collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a taxonomy for systematically formulating reflection-promoting questions
Builds on a Socratic question taxonomy and a human-centred XAI question bank
Repurposes XAI techniques to enable effective human oversight beyond explanations