Individualized Cognitive Simulation in Large Language Models: Evaluating Different Cognitive Representation Methods

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the capability boundaries of large language models (LLMs) in individualized cognitive simulation (ICS)—specifically, their ability to model an author’s idiosyncratic thought processes, not merely surface-level stylistic patterns. To this end, we construct the first novel-author dataset explicitly designed for ICS and propose a multidimensional evaluation framework comprising 11 cognitively grounded conditions. Methodologically, we compare representation strategies—including linguistic feature extraction, conceptual mapping, and static persona modeling—and introduce a dynamic hybrid representation integrating both linguistic and conceptual features. Results show that this dynamic representation achieves superior performance in author-style imitation, outperforming static persona baselines; however, LLMs remain fundamentally limited in simulating deep narrative-structural cognition. This work establishes the first systematic task formulation for ICS, empirically identifies critical bottlenecks in individualized cognitive modeling by LLMs, and delineates concrete pathways for improvement.

📝 Abstract
Individualized cognitive simulation (ICS) aims to build computational models that approximate the thought processes of specific individuals. While large language models (LLMs) convincingly mimic surface-level human behavior such as role-play, their ability to simulate deeper individualized cognitive processes remains poorly understood. To address this gap, we introduce a novel task that evaluates different cognitive representation methods in ICS. We construct a dataset from recently published novels (later than the release date of the tested LLMs) and propose an 11-condition cognitive evaluation framework to benchmark seven off-the-shelf LLMs in the context of authorial style emulation. We hypothesize that effective cognitive representations can help LLMs generate storytelling that better mirrors the original author. Thus, we test different cognitive representations, e.g., linguistic features, concept mappings, and profile-based information. Results show that combining conceptual and linguistic features is particularly effective in ICS, outperforming static profile-based cues in overall evaluation. Importantly, LLMs are more effective at mimicking linguistic style than narrative structure, underscoring their limits in deeper cognitive simulation. These findings provide a foundation for developing AI systems that adapt to individual ways of thinking and expression, advancing more personalized and human-aligned creative technologies.
Problem

Research questions and friction points this paper is trying to address.

Evaluating cognitive representation methods for individualized thought simulation
Assessing LLMs' ability to mimic authorial style and narrative structure
Developing AI systems that adapt to individual thinking and expression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining conceptual and linguistic features for cognitive simulation
Evaluating cognitive representations using authorial style emulation
Testing linguistic features and concept mappings in storytelling
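The paper does not publish its implementation, but the hybrid representation it describes—combining extracted linguistic features with concept mappings before prompting an LLM—can be sketched roughly as follows. This is a minimal illustration with hypothetical feature choices (sentence length, type-token ratio, frequent content words as proxy "concepts"); the paper's actual features and prompt format may differ.

```python
import re
from collections import Counter

def linguistic_features(text: str) -> dict:
    """Toy surface-level style statistics, standing in for the
    paper's linguistic feature extraction."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

def concept_map(text: str, top_k: int = 3) -> list:
    """Toy concept mapping: most frequent content words as proxy concepts."""
    stopwords = {"the", "a", "an", "and", "of", "to", "in", "was", "it", "is"}
    words = [w for w in re.findall(r"[A-Za-z']+", text.lower())
             if w not in stopwords]
    return [w for w, _ in Counter(words).most_common(top_k)]

def hybrid_prompt(excerpt: str, task: str) -> str:
    """Fuse linguistic and conceptual cues into a single conditioning
    prompt for an off-the-shelf LLM (the 'dynamic hybrid' idea)."""
    feats = linguistic_features(excerpt)
    concepts = concept_map(excerpt)
    return (
        f"Emulate an author whose average sentence length is "
        f"{feats['avg_sentence_len']:.1f} words and whose type-token "
        f"ratio is {feats['type_token_ratio']:.2f}. "
        f"Recurring concepts: {', '.join(concepts)}. "
        f"Task: {task}"
    )

prompt = hybrid_prompt(
    "The sea was grey. The sea swallowed the ship.",
    "Continue the story in the same voice.",
)
```

A static profile-based baseline would instead hard-code a fixed author description; the point of the hybrid representation is that both cue types are recomputed from the author's actual text.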