🤖 AI Summary
This study examines how the “frictionless” design of generative AI intensifies user dependency, eroding cognitive agency and amplifying automation bias, thereby threatening human cognitive sovereignty. To counter this, the paper proposes a “scaffolded cognitive friction” framework that reconceptualizes friction as a technical necessity for preserving that sovereignty. It introduces a novel multi-agent computational “devil’s advocate” system that proactively injects beneficial cognitive tension to mitigate heuristic-driven decision-making, and it outlines a multimodal neurobehavioral agenda—gaze transition entropy, task-evoked pupillary responses, functional near-infrared spectroscopy (fNIRS), and hierarchical drift-diffusion modeling—for quantifying cognitive effort independently of decision outcomes. An empirical analysis of 1,223 AI–HCI papers (2023 to early 2026) reveals a sharp divergence in early 2026: the share of research on autonomous-agent optimization surged to 19.6%, while the share of cognitive sovereignty studies fell to 13.1%, corroborating the “agentic takeover” trajectory and positioning intentional friction as a critical technical pathway for global AI governance.
📝 Abstract
The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau = 0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1% of papers) collapsed to 13.1% in early 2026 amid an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., computational Devil's Advocates) that inject germane epistemic tension and disrupt heuristic execution. Furthermore, we outline a multimodal computational phenotyping agenda -- integrating gaze transition entropy, task-evoked pupillometry, fNIRS, and Hierarchical Drift-Diffusion Modeling (HDDM) -- to mathematically decouple decision outcomes from cognitive effort. Ultimately, intentionally designed friction is not merely a psychological intervention but a foundational technical prerequisite for enforcing global AI governance and preserving societal cognitive resilience.
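The thresholded zero-shot classification step can be sketched minimally. The abstract does not specify the embedding model or pipeline internals, so everything below (the function name, the cosine-similarity scoring, the pre-computed embedding arrays) is an illustrative assumption: each paper is assigned to the nearest label description in embedding space, and assignments whose similarity falls below the confidence threshold τ = 0.7 are discarded, which is one way a "high-confidence" corpus could be filtered.

```python
import numpy as np

def classify_zero_shot(paper_embs: np.ndarray,
                       label_embs: np.ndarray,
                       tau: float = 0.7):
    """Assign each paper to its nearest label by cosine similarity,
    keeping only assignments whose similarity clears the threshold tau.
    Returns (label_index, similarity) per paper, or None if below tau."""
    # L2-normalize rows so a plain dot product equals cosine similarity.
    p = paper_embs / np.linalg.norm(paper_embs, axis=1, keepdims=True)
    l = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    sims = p @ l.T                      # shape: (n_papers, n_labels)
    best = sims.argmax(axis=1)          # nearest label per paper
    conf = sims[np.arange(len(best)), best]
    return [(int(b), float(c)) if c >= tau else None
            for b, c in zip(best, conf)]
```

With real data, `paper_embs` and `label_embs` would come from a sentence-embedding model; here they are left abstract so the thresholding logic stands alone.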
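Of the proposed effort measures, gaze transition entropy has a compact standard formulation: the conditional entropy of a first-order Markov model of fixation transitions between areas of interest (AOIs), weighted by how often each AOI serves as a transition source. A minimal sketch, assuming that standard definition rather than anything specified in the paper:

```python
import math
from collections import Counter

def gaze_transition_entropy(fixations):
    """Gaze transition entropy in bits: entropy of first-order Markov
    transitions between AOIs, weighted by each source AOI's share of
    all transitions. `fixations` is a sequence of AOI labels."""
    transitions = Counter(zip(fixations, fixations[1:]))
    total = sum(transitions.values())
    # How often each AOI appears as the *source* of a transition.
    src = Counter(a for a, _ in transitions.elements())
    h = 0.0
    for (a, b), n in transitions.items():
        p_src = src[a] / total          # weight of source AOI a
        p_ab = n / src[a]               # conditional P(b | a)
        h -= p_src * p_ab * math.log2(p_ab)
    return h
```

A perfectly regular scanpath such as `"ABAB"` yields zero entropy (every transition is deterministic), while less predictable scanning drives the value up, which is why the measure is used as a proxy for exploratory versus heuristic viewing.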