Evaluating LLM Alignment With Human Trust Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited understanding of how large language models (LLMs) internally represent human trust mechanisms. It presents the first systematic alignment between classical theories of human trust and LLM internal representations through white-box analysis. Using contrastive prompting, the authors generate trust-related concept embeddings in the activation space of EleutherAI/gpt-j-6B and map them onto five prominent computational trust models via cosine similarity. The results demonstrate that gpt-j-6B’s internal representations align most closely with Castelfranchi’s socio-cognitive model of trust, followed by the Marsh model, thereby confirming that LLMs can encode complex socio-cognitive structures. This finding opens new avenues for research in trustworthy AI and computational modeling of social cognition.
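The contrastive-prompting step can be sketched in a few lines of Python. In this minimal sketch the prompt pair, layer index, and mean pooling are illustrative assumptions, not the paper's exact configuration; the idea is to difference the mean hidden states of a positive/negative prompt pair to obtain a concept vector in the model's activation space.

```python
# Minimal sketch of contrastive prompting for concept vectors.
# The prompt pair, layer index, and mean pooling are assumptions,
# not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def concept_vector(positive: str, negative: str, layer: int = 20) -> torch.Tensor:
    """Difference of mean hidden states at one layer for a contrastive prompt pair."""
    states = []
    for prompt in (positive, negative):
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs)
        # hidden_states is a tuple of (num_layers + 1) tensors of shape
        # [batch, seq_len, hidden]; pool over the sequence dimension.
        states.append(out.hidden_states[layer][0].mean(dim=0))
    return states[0] - states[1]

# Hypothetical prompt pair for the "trust" concept.
trust_vec = concept_vector("I completely trust this person.",
                           "I completely distrust this person.")
```

Concept vectors built this way for each trust-model attribute can then be compared against the trust vector via cosine similarity, which is the mapping step the summary describes.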

📝 Abstract
Trust plays a pivotal role in enabling effective cooperation, reducing uncertainty, and guiding decision-making in both human interactions and multi-agent systems. Despite its significance, there is limited understanding of how large language models (LLMs) internally conceptualize and reason about trust. This work presents a white-box analysis of trust representation in EleutherAI/gpt-j-6B, using contrastive prompting to generate embedding vectors within the LLM's activation space for dyadic trust and related interpersonal relationship attributes. We first identified trust-related concepts from five established human trust models. We then determined a threshold for significant conceptual alignment by computing pairwise cosine similarities across 60 general emotional concepts. Finally, we measured the cosine similarities between the LLM's internal representation of trust and the derived trust-related concepts. Our results show that the internal trust representation of EleutherAI/gpt-j-6B aligns most closely with the Castelfranchi socio-cognitive model, followed by the Marsh model. These findings indicate that LLMs encode socio-cognitive constructs in their activation space in ways that support meaningful comparative analyses, inform theories of social cognition, and support the design of human-AI collaborative systems.
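The thresholding step in the abstract can be sketched as follows. The 0.95 quantile cutoff and the way the ~60 emotional-concept vectors are produced are assumptions for illustration; the paper derives its threshold from pairwise cosine similarities among general emotional concepts, and alignments above the threshold count as significant.

```python
# Sketch of deriving an alignment threshold from pairwise cosine
# similarities among general emotional concepts. The 0.95 quantile
# is an illustrative assumption, not the paper's stated statistic.
import torch
import torch.nn.functional as F

def pairwise_cosine_threshold(vectors: list[torch.Tensor], q: float = 0.95) -> float:
    """Upper quantile of pairwise cosine similarities, used as a cutoff."""
    sims = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            sims.append(F.cosine_similarity(vectors[i], vectors[j], dim=0).item())
    return torch.tensor(sims).quantile(q).item()

def is_aligned(trust_vec: torch.Tensor, concept_vec: torch.Tensor,
               threshold: float) -> bool:
    """A trust-model concept counts as aligned if it clears the threshold."""
    return F.cosine_similarity(trust_vec, concept_vec, dim=0).item() > threshold

# emotion_vecs: ~60 concept vectors built with the same contrastive
# procedure (omitted here); threshold = pairwise_cosine_threshold(emotion_vecs)
```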
Problem

Research questions and friction points this paper is trying to address.

trust
large language models
human trust models
alignment
social cognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

trust representation
contrastive prompting
large language models
socio-cognitive modeling
activation space analysis