Analyzing Cognitive Differences Among Large Language Models through the Lens of Social Worldview

📅 2025-05-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates implicit social cognitive attitudes—such as authority, egalitarianism, autonomy, and fatalism—embedded in large language models (LLMs), and how these attitudes are modulated by social cues. Method: We introduce the first quantifiable Social Worldview Taxonomy (SWT), a culturally grounded, dimensional measurement framework, to systematically model and evaluate the cognitive profiles of 28 mainstream LLMs. Using social referent experimental designs, prompt engineering, and cross-model consistency analysis, we assess attitude stability and cue sensitivity. Contribution/Results: (1) LLMs exhibit stable, interpretable structural differences along non-ethical social dimensions; (2) explicit social cues significantly and reproducibly modulate attitude outputs (p < 0.001); (3) we publicly release the first benchmark and open-source toolkit for social worldview assessment. This work establishes a novel paradigm for understanding the nature of LLMs’ social intelligence and advancing controllable, value-aligned AI.
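The reported cue-modulation effect (p < 0.001) could, in principle, be checked with a paired comparison of attitude scores elicited with and without an explicit social cue. The sketch below is illustrative only: the data and function names are hypothetical and not taken from the paper's toolkit.

```python
import statistics


def paired_t_statistic(baseline, cued):
    """t statistic for paired samples (cued minus baseline).

    A significant positive value suggests the social cue shifts the
    model's attitude scores upward; the p-value would come from the
    t distribution with len(diffs) - 1 degrees of freedom.
    """
    diffs = [c - b for b, c in zip(baseline, cued)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample std dev (n - 1 denominator)
    return mean_d / (sd_d / n ** 0.5)


# Hypothetical Likert-scale attitude scores (1-5) for one model,
# with and without an explicit social cue in the prompt.
baseline = [3.1, 2.8, 3.0, 3.3, 2.9, 3.2, 3.0, 2.7]
cued = [3.6, 3.4, 3.5, 3.9, 3.3, 3.8, 3.4, 3.2]

t = paired_t_statistic(baseline, cued)
```

With a consistent upward shift like this, the t statistic is large, matching the paper's claim that explicit cues modulate attitude outputs reproducibly.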

📝 Abstract
Large Language Models (LLMs) have become integral to daily life, widely adopted in communication, decision-making, and information retrieval, raising critical questions about how these systems implicitly form and express socio-cognitive attitudes or "worldviews". While existing research extensively addresses demographic and ethical biases, broader dimensions, such as attitudes toward authority, equality, autonomy, and fate, remain under-explored. In this paper, we introduce the Social Worldview Taxonomy (SWT), a structured framework grounded in Cultural Theory, operationalizing four canonical worldviews (Hierarchy, Egalitarianism, Individualism, Fatalism) into measurable sub-dimensions. Using SWT, we empirically identify distinct and interpretable cognitive profiles across 28 diverse LLMs. Further, inspired by Social Referencing Theory, we experimentally demonstrate that explicit social cues systematically shape these cognitive attitudes, revealing both general response patterns and nuanced model-specific variations. Our findings enhance the interpretability of LLMs by revealing implicit socio-cognitive biases and their responsiveness to social feedback, thus guiding the development of more transparent and socially responsible language technologies.
Problem

Research questions and friction points this paper is trying to address.

Examines how LLMs implicitly form and express socio-cognitive worldviews
Investigates under-explored dimensions such as authority, equality, autonomy, and fate in LLMs
Assesses the impact of explicit social cues on LLMs' cognitive attitudes and biases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Social Worldview Taxonomy (SWT), a measurement framework grounded in Cultural Theory
Measures four canonical worldviews (Hierarchy, Egalitarianism, Individualism, Fatalism) across 28 LLMs
Tests how explicit social cues modulate the models' cognitive attitudes
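The measurement step above amounts to aggregating item-level responses into a per-dimension profile. A minimal sketch of that aggregation, assuming Likert-scored items each tagged with one of the four SWT worldviews (dimension names are from the paper; the item data and function name are hypothetical):

```python
from collections import defaultdict


def worldview_profile(item_scores):
    """Average item scores per worldview dimension.

    item_scores is a list of (dimension, score) pairs, one per
    questionnaire item answered by the model under test.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for dim, score in item_scores:
        sums[dim] += score
        counts[dim] += 1
    return {dim: sums[dim] / counts[dim] for dim in sums}


# Hypothetical Likert responses (1 = strongly disagree, 5 = strongly agree)
responses = [
    ("Hierarchy", 4), ("Hierarchy", 3),
    ("Egalitarianism", 5), ("Egalitarianism", 4),
    ("Individualism", 2), ("Individualism", 3),
    ("Fatalism", 1), ("Fatalism", 2),
]

profile = worldview_profile(responses)
# e.g. a model scoring high on Egalitarianism and low on Fatalism
```

Repeating this per model yields the kind of comparable cognitive profiles the paper reports across its 28 LLMs.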