🤖 AI Summary
While large language models (LLMs) are widely applied in research on individual cognition, their systematic use in collective cognition remains underexplored. Method: This paper pioneers the use of LLMs as a “computational sandbox” and “theoretical probe” for collective cognition, combining multi-agent simulation, prompt-driven group-reasoning experiments, and a cognitive interpretability analysis framework to address the methodological challenges posed by complex group interactions. We identify structural bias risks inherent in LLM-based simulations of group dynamics and systematically evaluate LLM capabilities in concept generation, consensus evolution, and error-cascade modeling against empirically grounded cognitive benchmarks. Contribution/Results: The work establishes a novel LLM-based paradigm for collective cognition research, offering a scalable, interpretable, and empirically verifiable methodology that bridges micro- and macro-cognitive scales in cognitive science.
📝 Abstract
LLMs are already transforming the study of individual cognition, but their application to collective cognition has been underexplored. We lay out how LLMs may help address the complexity that has hindered the study of collectives, and we raise possible risks that warrant new methods.