Epistemology gives a Future to Complementarity in Human-AI Interactions

📅 2026-01-14
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses a limitation of existing research that reduces human–AI complementarity to post-hoc performance metrics, thereby overlooking its deeper epistemic, sociotechnical, and cost-benefit dimensions. It introduces computational reliabilism—an epistemological framework—into the discourse on human–AI collaboration, reconceptualizing complementarity as evidence that the human–AI system functions as a reliable cognitive process. Embedded within a "justificatory AI" framework, this approach supports the practical reasoning of individuals affected by AI decisions. Through theoretical modeling and analysis of collaborative decision-making, the work establishes a robust theoretical anchor for complementarity, positioning it as a tool for calibrating cognitive reliability and integrating it into broader trustworthy AI evaluation frameworks. This significantly enhances the ethical and practical value of human–AI complementarity in high-stakes domains such as healthcare and management.

📝 Abstract
Human-AI complementarity is the claim that a human supported by an AI system can outperform either alone in a decision-making process. Since its introduction in the human-AI interaction literature, it has gained traction by generalizing the reliance paradigm and by offering a more practical alternative to the contested construct of 'trust in AI.' Yet complementarity faces key theoretical challenges: it lacks precise theoretical anchoring, it is formalized merely as a post hoc indicator of relative predictive accuracy, it remains silent about other desiderata of human-AI interactions, and it abstracts away from the magnitude-cost profile of its performance gain. As a result, complementarity is difficult to obtain in empirical settings. In this work, we leverage epistemology to address these challenges by reframing complementarity within the discourse on justificatory AI. Drawing on computational reliabilism, we argue that historical instances of complementarity function as evidence that a given human-AI interaction is a reliable epistemic process for a given predictive task. Together with other reliability indicators assessing the alignment of the human-AI team with epistemic standards and socio-technical practices, complementarity contributes to the degree of reliability of human-AI teams when generating predictions. This supports the practical reasoning of those affected by these outputs -- patients, managers, regulators, and others. In summary, our approach suggests that the role and value of complementarity lie not in providing a relative measure of predictive accuracy, but in helping calibrate decision-making to the reliability of AI-supported processes that increasingly shape everyday life.
Problem

Research questions and friction points this paper is trying to address.

human-AI complementarity
epistemology
reliability
predictive accuracy
justificatory AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

epistemology
computational reliabilism
human-AI complementarity
justificatory AI
reliable epistemic process