🤖 AI Summary
This work investigates the degree of representational and behavioral alignment between generative language models and human semantic cognition in lexical similarity judgment. We introduce a novel word-triplet evaluation framework to systematically assess 32 open-source models, spanning diverse scales and training paradigms, using multi-layer Transformer representation extraction, cosine-similarity-based representational alignment analysis, and Spearman correlation for behavioral consistency evaluation. Key findings: (1) intermediate layers of small models (e.g., Phi-3, Gemma-2B) achieve human-level representational alignment; (2) instruction fine-tuning markedly improves behavioral consistency (+28.6% on average) without enhancing representational alignment; (3) only the largest evaluated model (Llama-3-70B) exhibits consistent trends in both representational and behavioral alignment; (4) alignment patterns are highly architecture- and layer-dependent, exhibiting strong heterogeneity. Our study establishes a novel paradigm and empirical benchmark for evaluating cognitive alignment in language models.
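The triplet setup described above can be sketched concretely: given layer-wise embeddings for three words, the most similar pair (by cosine similarity) is taken as related and the remaining word as the odd one out. The vectors below are toy 3-d illustrations, not actual model activations, and the function names are our own, not the paper's:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def odd_one_out(words, vectors):
    """Pick the word NOT in the most cosine-similar pair of the triplet."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    best = max(pairs, key=lambda p: cosine(vectors[p[0]], vectors[p[1]]))
    return words[({0, 1, 2} - set(best)).pop()]

# Hypothetical embeddings: "cat" and "dog" point in a similar direction,
# so "car" should come out as the odd one.
words = ["cat", "dog", "car"]
vecs = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.3]]
print(odd_one_out(words, vecs))  # → car
```

In the representational analyses, such similarities would be computed per layer from extracted hidden states, then compared against human odd-one-out choices.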
📝 Abstract
Small and mid-sized generative language models have gained increasing attention. Their size and availability make them amenable to analysis at both the behavioral and the representational level, allowing investigations of how these levels interact. We evaluate 32 publicly available language models for their representational and behavioral alignment with human similarity judgments on a word triplet task. This provides a novel evaluation setting to probe semantic associations in language beyond common pairwise comparisons. We find that (1) even the representations of small language models can achieve human-level alignment, (2) instruction-tuned model variants can exhibit substantially increased agreement, (3) the pattern of alignment across layers is highly model-dependent, and (4) alignment based on models' behavioral responses is highly dependent on model size, matching their representational alignment only for the largest evaluated models.
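Behavioral consistency is measured via Spearman correlation, i.e., Pearson correlation computed on ranks. A minimal self-contained implementation is sketched below; the score arrays are hypothetical stand-ins for per-item model and human agreement values, not data from the study:

```python
def rank(xs):
    """Average 1-based ranks, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative per-triplet scores (invented for the sketch):
model_scores = [0.9, 0.4, 0.7, 0.2, 0.6]
human_scores = [0.8, 0.5, 0.9, 0.1, 0.4]
print(round(spearman(model_scores, human_scores), 2))  # → 0.8
```

In practice one would use `scipy.stats.spearmanr` for the same computation; the hand-rolled version just makes the rank-then-correlate logic explicit.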