Using LLMs to Advance the Cognitive Science of Collectives

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
While large language models (LLMs) are widely applied in individual cognition research, their systematic use in collective cognition remains underexplored. Method: This paper pioneers the use of LLMs as a “computational sandbox” and “theoretical probe” for collective cognition, employing multi-agent simulation, prompt-driven group reasoning experiments, and a cognitive interpretability analysis framework to address methodological challenges arising from complex group interactions. We identify structural bias risks inherent in LLM-based simulations of group dynamics and systematically evaluate LLM capabilities—across concept generation, consensus evolution, and error cascade modeling—against empirically grounded cognitive benchmarks. Contribution/Results: The work establishes a novel, LLM-based paradigm for collective cognition research, offering a scalable, interpretable, and empirically verifiable methodology that bridges micro- and macro-cognitive scales in cognitive science.
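The "computational sandbox" idea above, simulating consensus evolution among interacting agents, can be sketched in miniature. This is an illustrative toy, not the paper's method: real LLM agents would exchange generated text, whereas here each agent is a deterministic stand-in holding a scalar opinion and moving partway toward the group mean each round (DeGroot-style averaging); all names are hypothetical.

```python
# Toy "computational sandbox" for consensus evolution.
# Each agent holds a scalar opinion; one interaction round moves every
# agent partway toward the current group mean (a deterministic stand-in
# for an LLM agent updating its view after reading the group's messages).

def step(opinions, weight=0.5):
    """One interaction round: shift each opinion toward the group mean."""
    mean = sum(opinions) / len(opinions)
    return [o + weight * (mean - o) for o in opinions]

def simulate(opinions, rounds=10, weight=0.5):
    """Run repeated rounds; return the full opinion trajectory."""
    history = [list(opinions)]
    for _ in range(rounds):
        opinions = step(opinions, weight)
        history.append(list(opinions))
    return history

history = simulate([0.0, 0.4, 1.0], rounds=10)
spread_start = max(history[0]) - min(history[0])
spread_end = max(history[-1]) - min(history[-1])
# The spread shrinks geometrically each round, so the group converges
# on a consensus at the initial mean opinion.
```

Swapping the averaging rule for actual model calls (and the scalar opinions for text) is where the paper's benchmark questions arise, e.g. whether LLM agents converge realistically or exhibit the structural biases the summary flags.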

📝 Abstract
LLMs are already transforming the study of individual cognition, but their application to studying collective cognition has been underexplored. We lay out how LLMs may be able to address the complexity that has hindered the study of collectives and raise possible risks that warrant new methods.
Problem

Research questions and friction points this paper is trying to address.

How can LLMs address the interaction complexity that has hindered the study of collective cognition?
Which gaps in collective cognition research do current LLM methods leave underexplored?
What risks arise when LLM-based simulations stand in for real group dynamics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as a "computational sandbox" and "theoretical probe" for group-level phenomena
Multi-agent simulation and prompt-driven group reasoning experiments for underexplored collective dynamics
New methods for identifying and mitigating structural bias risks in LLM-based simulations