🤖 AI Summary
This paper identifies a fundamental epistemic divergence between large language models (LLMs) and human cognition in how knowledge is acquired and judgments are formed: LLMs are stochastic pattern-completion systems operating over high-dimensional graphs of linguistic transitions, not embodied, motivation-driven cognitive agents. Method: The paper introduces an "epistemic fault line" analytical framework and formalizes the concept of *Epistemia*, identifying and axiomatizing seven structural ruptures (semantic grounding, symbolic interpretation, embodied experience, intrinsic motivation, causal reasoning, metacognitive capacity, and value embedding) through integrated historical-philosophical analysis, cross-disciplinary cognitive comparison, graph-theoretic modeling, and epistemological critique. Contribution/Results: The work delivers a theoretically grounded, operationally actionable foundation for AI capability assessment, governance design, and public epistemic literacy, together with a seven-dimensional differential taxonomy (the "Epistemic Fault Line Spectrum") for rigorous comparative analysis.
📝 Abstract
Large language models (LLMs) are widely described as artificial intelligence, yet their epistemic profile diverges sharply from human cognition. Here we show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. Tracing the historical shift from symbolic AI and information-filtering systems to large-scale generative transformers, we argue that LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world. By systematically mapping human and artificial epistemic pipelines, we identify seven epistemic fault lines: divergences in grounding, parsing, experience, motivation, causal reasoning, metacognition, and value. We call the resulting condition Epistemia: a structural situation in which linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without the labor of judgment. We conclude by outlining consequences for evaluation, governance, and epistemic literacy in societies increasingly organized around generative AI.
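The "walk on a graph of linguistic transitions" framing can be made concrete with a toy sketch. The following is an illustrative simplification, not the paper's formal model: it builds a bigram transition graph from a corpus and generates text by a stochastic walk in which each next word is sampled in proportion to its observed frequency after the current word. Real LLMs condition on long contexts via learned parameters, but the epistemic point carries over: generation is pattern completion over observed transitions, with no belief formation involved. All names below are illustrative.

```python
import random

def build_transition_graph(corpus):
    """Map each word to the list of words observed to follow it.

    Repeated successors are kept, so sampling from the list is
    frequency-weighted."""
    graph = {}
    tokens = corpus.split()
    for current, nxt in zip(tokens, tokens[1:]):
        graph.setdefault(current, []).append(nxt)
    return graph

def random_walk(graph, start, steps, seed=None):
    """Generate text as a stochastic walk on the transition graph:
    at each step, sample the next word from the successors of the
    current word; stop early at a word with no observed successor."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(steps):
        successors = graph.get(word)
        if not successors:  # dead end: no observed continuation
            break
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
graph = build_transition_graph(corpus)
print(random_walk(graph, "the", steps=8, seed=0))
```

Every adjacent word pair in the output is an attested bigram of the corpus, so the text is always locally plausible; whether it is true, grounded, or even globally coherent is never evaluated anywhere in the process, which is the structural point the abstract makes.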