Position: Human-Centric AI Requires a Minimum Viable Level of Human Understanding

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
As AI systems grow increasingly capable, human ability to understand, verify, and intervene in their operation diminishes, undermining the effectiveness of oversight. This work proposes the “Cognitive Integrity Threshold” (CIT)—a formalized boundary defining the minimal level of human understanding required to sustain meaningful supervision, autonomy, and accountability in human–AI collaboration. Building on three interrelated dimensions—verifiability, understanding-preserving interaction, and institutional governance—the study develops an operational framework that integrates insights from human–computer interaction, cognitive science, and AI governance. Moving beyond conventional approaches centered on transparency and control, this research offers a novel paradigm for human-centered AI design in accountability-sensitive contexts, ensuring that human oversight remains substantive rather than merely symbolic as automation advances.

📝 Abstract
AI systems increasingly produce fluent, correct, end-to-end outcomes. Over time, this erodes users' ability to explain, verify, or intervene. We define this divergence as the Capability-Comprehension Gap: a decoupling in which assisted performance improves while users' internal models deteriorate. This paper argues that prevailing approaches to transparency, user control, literacy, and governance do not define the foundational understanding humans must retain for oversight under sustained AI delegation. To formalize this, we define the Cognitive Integrity Threshold (CIT) as the minimum comprehension required to preserve oversight, autonomy, and accountable participation under AI assistance. CIT does not require full reasoning reconstruction, nor does it constrain automation. It identifies the threshold beyond which oversight becomes merely procedural and contestability fails. We operationalize CIT through three functional dimensions: (i) verification capacity, (ii) comprehension-preserving interaction, and (iii) institutional scaffolds for governance. This motivates a design and governance agenda that aligns human-AI interaction with cognitive sustainability in responsibility-critical settings.
Problem

Research questions and friction points this paper is trying to address.

Capability-Comprehension Gap
Cognitive Integrity Threshold
human-AI interaction
oversight
accountability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cognitive Integrity Threshold
Capability-Comprehension Gap
human-AI interaction
accountable AI
cognitive sustainability