Uncovering the Computational Ingredients of Human-Like Representations in LLMs

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluation benchmarks poorly capture how well large language models (LLMs) align with human cognitive representations, limiting our understanding of their psychological plausibility. Method: The authors systematically assess over 70 LLMs on a human-behavioral triplet (odd-one-out) similarity task built from the THINGS concept database, a canonical paradigm in cognitive science, and quantify representational alignment with human judgments. Contribution/Results: Instruction tuning and larger attention-head dimensionality substantially improve behavioral alignment, whereas multimodal pretraining and parameter count have little effect. Crucially, existing benchmarks track alignment only partially: some (e.g., MMLU) correlate better with human representational similarity than others (e.g., MUSR), but none fully accounts for the variance in alignment scores, making them inadequate proxies for cognitive alignment. The work integrates a classical cognitive paradigm into large-scale LLM representation analysis, reveals dissociable effects of architectural and training factors on cognitive fidelity, exposes a key limitation in prevailing evaluation frameworks for assessing human–machine representational consistency, and provides both theoretical grounding and empirical guidance for developing cognitively grounded LLMs.
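
To make the evaluation concrete, below is a minimal sketch of how a triplet odd-one-out alignment score can be computed from model embeddings. The embedding source, cosine similarity, and function names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def predict_odd_one_out(emb, triplet):
    """Predict the odd-one-out concept in a triplet from embedding similarities.

    The two concepts with the highest pairwise cosine similarity are treated as
    the 'matching' pair; the remaining concept is the predicted odd-one-out.
    """
    a, b, c = triplet

    def cos(x, y):
        vx, vy = emb[x], emb[y]
        return float(np.dot(vx, vy) / (np.linalg.norm(vx) * np.linalg.norm(vy)))

    # Map each candidate odd-one-out to the similarity of the remaining pair.
    pair_sim = {c: cos(a, b), a: cos(b, c), b: cos(a, c)}
    return max(pair_sim, key=pair_sim.get)

def alignment_score(emb, human_trials):
    """Fraction of triplet trials where the model's odd-one-out matches the human choice."""
    hits = [predict_odd_one_out(emb, triplet) == human_choice
            for triplet, human_choice in human_trials]
    return sum(hits) / len(hits)

# Toy usage with hypothetical 3-d embeddings and a single human trial.
emb = {"dog": np.array([1.0, 0.1, 0.0]),
       "cat": np.array([0.9, 0.2, 0.1]),
       "hammer": np.array([0.0, 0.1, 1.0])}
print(alignment_score(emb, [(("dog", "cat", "hammer"), "hammer")]))  # -> 1.0
```

The design choice here is that the most similar pair is assumed to "belong together", so the remaining concept is the model's odd-one-out choice; alignment is then simply the proportion of trials on which that choice matches the human response.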

📝 Abstract
The ability to translate diverse patterns of inputs into structured patterns of behavior has been thought to rest on both humans' and machines' ability to learn robust representations of relevant concepts. The rapid advancement of transformer-based large language models (LLMs) has led to a diversity of computational ingredients -- architectures, fine-tuning methods, and training datasets among others -- but it remains unclear which of these ingredients are most crucial for building models that develop human-like representations. Further, most current LLM benchmarks are not suited to measuring representational alignment between humans and models, making benchmark scores unreliable for assessing whether current LLMs are making progress towards becoming useful cognitive models. We address these limitations by first evaluating a set of over 70 models that widely vary in their computational ingredients on a triplet similarity task, a method well established in the cognitive sciences for measuring human conceptual representations, using concepts from the THINGS database. Comparing human and model representations, we find that models that undergo instruction-finetuning and which have larger dimensionality of attention heads are among the most human-aligned, while multimodal pretraining and parameter size have limited bearing on alignment. Correlations between alignment scores and scores on existing benchmarks reveal that while some benchmarks (e.g., MMLU) are better suited than others (e.g., MUSR) for capturing representational alignment, no existing benchmark is capable of fully accounting for the variance of alignment scores, demonstrating their insufficiency in capturing human-AI alignment. Taken together, our findings help highlight the computational ingredients most essential for advancing LLMs towards models of human conceptual representation and address a key benchmarking gap in LLM evaluation.
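
The benchmark comparison described in the abstract amounts to correlating per-model alignment scores with per-model benchmark scores. A minimal sketch using Spearman rank correlation is shown below; the model names and score values are made-up placeholders, not the paper's reported numbers.

```python
from scipy.stats import spearmanr

# Hypothetical per-model scores (placeholders, not the paper's reported numbers).
alignment = {"model_a": 0.61, "model_b": 0.55, "model_c": 0.48, "model_d": 0.44}
benchmark = {"model_a": 0.72, "model_b": 0.70, "model_c": 0.58, "model_d": 0.63}

# Correlate the two score lists across the same ordering of models.
models = sorted(alignment)
rho, p_value = spearmanr([alignment[m] for m in models],
                         [benchmark[m] for m in models])
print(f"Spearman rho between alignment and benchmark scores: {rho:.2f} (p = {p_value:.3f})")
```

A rank correlation is a natural fit here because it asks only whether benchmarks order models the same way alignment does, without assuming the two scales are comparable.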
Problem

Research questions and friction points this paper is trying to address.

Identifying key computational ingredients for human-like representations in LLMs
Evaluating representational alignment between humans and language models
Assessing limitations of existing benchmarks in measuring human-AI alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating over 70 models on a triplet similarity task using concepts from the THINGS database
Showing that instruction fine-tuning improves human alignment
Showing that larger attention-head dimensionality improves alignment