🤖 AI Summary
This work identifies a systematic misalignment between the actual knowledge boundaries of large language models (LLMs) and humans' subjective perceptions of their capabilities. To quantify this gap, the study combines multi-source factual probing, controlled prompt engineering, and large-scale crowdsourced metacognitive experiments, and introduces "cognitive alignment" as a new evaluation paradigm for assessing LLM–human epistemic congruence. The results show that humans overestimate LLM performance on causal reasoning and domain-specific tasks by 37%, while underestimating the models' capacity for pattern memorization. Moreover, the presence of default explanations and greater explanation length significantly bias users' confidence in model outputs, even when explanation quality has no effect on answer accuracy. Building on these findings, the paper proposes an explainability calibration framework intended to improve reliability and decision quality in human–model collaboration.
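The summary does not specify how the perception gap is computed, so here is a minimal sketch of one plausible reading: the gap as the relative difference between human-predicted and measured model accuracy per task category. All names, numbers, and the `perception_gap` helper are illustrative assumptions, not the paper's actual protocol or data.

```python
# Hypothetical "perception gap" metric: compare humans' predicted model
# accuracy against the model's measured accuracy per task category.
# Data values below are made up purely for illustration.
from statistics import mean

# Crowdworkers' estimates of model accuracy (illustrative).
human_predicted = {
    "causal_reasoning": [0.85, 0.80, 0.90],
    "pattern_recall":   [0.55, 0.60, 0.50],
}
# Accuracy measured via factual probing (illustrative).
measured = {
    "causal_reasoning": 0.62,
    "pattern_recall":   0.78,
}

def perception_gap(predicted: list[float], actual: float) -> float:
    """Relative over-/under-estimation of model accuracy.

    Positive values mean humans overestimate the model; negative values
    mean they underestimate it. Expressed as a fraction of actual accuracy.
    """
    return (mean(predicted) - actual) / actual

for task, preds in human_predicted.items():
    gap = perception_gap(preds, measured[task])
    direction = "overestimate" if gap > 0 else "underestimate"
    print(f"{task}: humans {direction} the model by {abs(gap):.0%}")
```

Under these made-up numbers the causal-reasoning gap comes out to roughly +37% and the pattern-recall gap to about -29%, which mirrors the direction of the reported findings; the paper's own figures would of course come from its probing and crowdsourcing data, not from this toy calculation.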