How LLMs Comprehend Temporal Meaning in Narratives: A Case Study in Cognitive Evaluation of LLMs

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) comprehend temporal semantics in narratives, specifically whether their behavior reflects human-like cognition or sophisticated pattern matching. Method: We introduce “Expert-in-the-Loop Probing”—a novel probing paradigm integrating standardized narrative stimuli, cognitive experimental design, and human baseline controls—to systematically evaluate LLMs across genre identification, temporal sequence representation, and aspectual causal inference. Contribution/Results: We find that LLMs heavily rely on prototypical cues, exhibit low consistency in tense-aspect judgment, and significantly underperform humans in aspect-based causal reasoning. Their temporal semantic representations are unstable and lack cognitive robustness. This work provides the first empirical demonstration of fundamental limitations in LLMs’ narrative temporal understanding and reveals a critical divergence from human cognition. It establishes a reproducible methodological framework and empirically grounded benchmark for evaluating and improving temporal reasoning in language models.

📝 Abstract
Large language models (LLMs) exhibit increasingly sophisticated linguistic capabilities, yet the extent to which these behaviors reflect human-like cognition versus advanced pattern recognition remains an open question. In this study, we investigate how LLMs process the temporal meaning of linguistic aspect in narratives that were previously used in human studies. Using an Expert-in-the-Loop probing pipeline, we conduct a series of targeted experiments to assess whether LLMs construct semantic representations and pragmatic inferences in a human-like manner. Our findings show that LLMs over-rely on prototypicality, produce inconsistent aspectual judgments, and struggle with causal reasoning derived from aspect, raising concerns about their ability to fully comprehend narratives. These results suggest that LLMs process aspect fundamentally differently from humans and lack robust narrative understanding. Beyond these empirical findings, we develop a standardized experimental framework for the reliable assessment of LLMs' cognitive and linguistic capabilities.
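One way to make the abstract's consistency claim concrete is a repeated-measures probe: ask a model the same tense-aspect judgment many times, under several paraphrases, and measure how stable its answers are. The sketch below is a minimal, hypothetical version of such a harness; `query_model`, the stimuli, and the paraphrases are illustrative stand-ins, not the authors' actual pipeline or materials.

```python
import random
from collections import Counter

# Hypothetical stand-in for a real LLM call; swap in an actual
# chat-completion client to run this against a model.
def query_model(prompt: str) -> str:
    return random.choice(["completed", "ongoing"])

# Illustrative minimal pair manipulating grammatical aspect
# (perfective vs. imperfective); not the paper's actual stimuli.
STIMULI = {
    "perfective": "Maria painted the fence.",
    "imperfective": "Maria was painting the fence.",
}

# Surface paraphrases of the same judgment question, used to test
# whether answers stay stable under prompt variation.
PARAPHRASES = [
    "{sentence} Is the painting completed or ongoing?",
    "Consider this sentence: {sentence} Answer 'completed' or 'ongoing'.",
    "{sentence} In one word, is the event completed or ongoing?",
]

def consistency(template: str, sentence: str, n_trials: int = 10) -> float:
    """Fraction of repeated answers matching the modal answer (1.0 = fully stable)."""
    answers = [query_model(template.format(sentence=sentence)) for _ in range(n_trials)]
    return Counter(answers).most_common(1)[0][1] / n_trials

for condition, sentence in STIMULI.items():
    scores = [consistency(p, sentence) for p in PARAPHRASES]
    print(condition, ["%.2f" % s for s in scores])
```

A fully consistent model scores 1.0 on every paraphrase; scores near chance (0.5 with two response options) reflect the kind of instability the paper reports.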
Problem

Research questions and friction points this paper addresses.

Assess whether LLMs process temporal meaning like humans
Evaluate LLMs' consistency in aspectual judgments
Test LLMs' causal reasoning from linguistic aspect
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert-in-the-Loop probing pipeline
Targeted experiments on aspectual judgments
Standardized cognitive assessment framework (see the sketch below)
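To illustrate what an aspect-based causal-inference item in such a framework might look like, the sketch below scores a model on a minimal pair where only grammatical aspect changes the licensed inference: a perfective clause entails completion, while an imperfective one leaves it open. The `ask` stub, the items, and the human-baseline figure are assumptions for illustration, not the paper's stimuli or results.

```python
# Hypothetical stand-in for a model call; replace with a real client.
def ask(question: str) -> str:
    return "yes"

# Illustrative minimal pair: the perfective version licenses the causal
# inference, while the imperfective leaves completion open. Items and
# expected answers are placeholders, not the paper's materials.
ITEMS = [
    {
        "context": "Tom fixed the radio. Later, music filled the room.",
        "question": "Did the radio play because Tom finished fixing it? Answer yes or no.",
        "expected": "yes",
    },
    {
        "context": "Tom was fixing the radio. Later, music filled the room.",
        "question": "Does the story guarantee that Tom finished fixing the radio? Answer yes or no.",
        "expected": "no",
    },
]

HUMAN_BASELINE = 0.90  # placeholder for a human control score, not from the paper

correct = sum(
    ask(f"{item['context']} {item['question']}").strip().lower().startswith(item["expected"])
    for item in ITEMS
)
print(f"model accuracy: {correct / len(ITEMS):.2f} vs. human baseline: {HUMAN_BASELINE:.2f}")
```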
Karin de Langis
PhD Candidate, University of Minnesota
Artificial Intelligence, Robotics, Computer Vision
Jong Inn Park
University of Minnesota
Natural Language Processing
Andreas Schramm
Hamline University
Bin Hu
University of Minnesota
Khanh Chi Le
University of Minnesota
Michael Mensink
University of Wisconsin-Stout
Ahn Thu Tong
Hamline University
Dongyeop Kang
University of Minnesota
Natural Language Processing