Advancing Cognitive Science with LLMs

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cognitive science has long suffered from conceptual ambiguity and knowledge fragmentation due to its inherently interdisciplinary nature. This paper proposes a human–AI collaborative knowledge-integration paradigm centered on large language models (LLMs) as cognitive assistants, not replacements, leveraging multi-source literature to enable systematic concept analysis, formal theoretical modeling, context-sensitive measurement design, and individual-difference-aware representation. Unlike substitutional LLM applications, the approach emphasizes augmentation: facilitating cross-disciplinary conceptual bridging, enhancing methodological reproducibility, and supporting idiographic cognitive modeling. The review delineates LLMs' current capabilities, limitations, and optimization pathways within cognitive science, arguing that, used judiciously to complement human expertise, they provide a scalable methodology for cumulative theory building and systematic field development.

📝 Abstract
Cognitive science faces ongoing challenges in knowledge synthesis and conceptual clarity, in part due to its multifaceted and interdisciplinary nature. Recent advances in artificial intelligence, particularly the development of large language models (LLMs), offer tools that may help to address these issues. This review examines how LLMs can support areas where the field has historically struggled, including establishing cross-disciplinary connections, formalizing theories, developing clear measurement taxonomies, achieving generalizability through integrated modeling frameworks, and capturing contextual and individual variation. We outline the current capabilities and limitations of LLMs in these domains, including potential pitfalls. Taken together, we conclude that LLMs can serve as tools for a more integrative and cumulative cognitive science when used judiciously to complement, rather than replace, human expertise.
Problem

Research questions and friction points this paper addresses.

Addressing knowledge synthesis challenges in cognitive science
Establishing cross-disciplinary connections and formalizing theories
Developing clear measurement taxonomies and capturing individual variation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs establish cross-disciplinary connections for cognitive science
LLMs formalize theories and develop measurement taxonomies
LLMs capture contextual variation while complementing human expertise