Supporting Data-Frame Dynamics in AI-assisted Decision Making

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI decision-support systems struggle to accommodate continuous evidence updating and dynamic hypothesis evolution at the same time in high-stakes settings. To address this, we propose a mixed-initiative AI decision-support framework that integrates the data-frame theory of sensemaking with the evaluative AI paradigm, establishing a hypothesis-driven, interpretable, and iterative human-AI closed-loop decision mechanism. The framework unifies concept bottleneck models, mixed-initiative interaction protocols, and sensemaking-informed data-frame modeling to enable collaborative hypothesis generation, validation, and real-time refinement between clinicians and AI. Evaluated on a skin cancer diagnosis prototype, the framework enhances decision transparency, human-AI collaboration efficiency, and clinical adaptability, advancing an interpretable and evolvable AI support paradigm for high-risk, dynamic decision-making environments.

📝 Abstract
High-stakes decision-making often requires a continuous interplay between evolving evidence and shifting hypotheses, a dynamic that is not well supported by current AI decision-support systems. In this paper, we introduce a mixed-initiative framework for AI-assisted decision making that is grounded in the data-frame theory of sensemaking and the evaluative AI paradigm. Our approach enables both humans and AI to collaboratively construct, validate, and adapt hypotheses. We demonstrate our framework with an AI-assisted skin cancer diagnosis prototype that leverages a concept bottleneck model to facilitate interpretable interactions and dynamic updates to diagnostic hypotheses.
Problem

Research questions and friction points this paper is trying to address.

Supporting dynamic evidence-hypothesis interplay in AI decision-making
Enabling collaborative human-AI hypothesis construction and adaptation
Facilitating interpretable AI interactions for medical diagnosis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed-initiative framework for AI decision support
Data-frame theory and evaluative AI paradigm
Concept bottleneck model for interpretable interactions
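The paper does not include implementation details, but the core idea of a concept bottleneck model can be sketched as follows: the input is first mapped to a small set of human-interpretable concept scores, and the final prediction depends only on those concepts, so a clinician can inspect or override them before the label is produced. This is a minimal illustrative sketch with random placeholder weights and hypothetical dermoscopic concept names, not the authors' model.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class ConceptBottleneckModel:
    """Minimal concept bottleneck: input -> interpretable concepts -> label.

    Weights are random placeholders standing in for learned parameters;
    concept names are hypothetical dermoscopic attributes for illustration.
    """

    def __init__(self, n_features, concept_names, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.concept_names = concept_names
        # Input-to-concept and concept-to-label weight matrices.
        self.W_c = rng.normal(size=(n_features, len(concept_names)))
        self.W_y = rng.normal(size=(len(concept_names), n_classes))

    def predict_concepts(self, x):
        # Each concept score is an independently interpretable value in (0, 1).
        return sigmoid(x @ self.W_c)

    def predict(self, x, concept_override=None):
        # A clinician can override the concept vector; the label prediction
        # then flows only through the (possibly edited) concepts, which is
        # what makes the interaction interpretable.
        c = self.predict_concepts(x) if concept_override is None else concept_override
        logits = c @ self.W_y
        return c, np.argmax(logits, axis=-1)


# Usage: predict, then re-predict with a clinician-edited concept vector.
model = ConceptBottleneckModel(
    n_features=4,
    concept_names=["asymmetry", "border_irregularity", "atypical_pigment"],
    n_classes=2,
)
x = np.ones((1, 4))
concepts, label = model.predict(x)
edited = concepts.copy()
edited[0, 0] = 0.0  # clinician rejects the "asymmetry" concept
_, revised_label = model.predict(x, concept_override=edited)
```

Because the bottleneck is the only path from input to label, overriding a concept directly changes the downstream hypothesis, which is the mechanism the framework uses for collaborative hypothesis validation and refinement.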