🤖 AI Summary
This study investigates the extent to which the internal lexicons of large language models (LLMs) reflect human lexical association patterns, with a particular focus on how model scale and temperature shape the trade-off between typicality and diversity. By comparing human association norms from the Small World of Words (SWOW) dataset against generations from Mistral-7B, Llama-3.1-8B, and Qwen-2.5-32B at varying temperatures, and by integrating psycholinguistic metrics such as word frequency and concreteness, the authors systematically analyze the variability and typicality of model responses. The work reveals that larger models (e.g., Qwen) produce highly typical yet low-variability associations, whereas smaller models exhibit greater diversity at the cost of typicality. Increasing temperature consistently enhances diversity but reduces typicality. While all models replicate human trends in frequency and concreteness, they diverge significantly in the distributional properties of their associative outputs.
📝 Abstract
Large language models (LLMs) achieve impressive fluency in text generation, yet the nature of their linguistic knowledge, in particular the human-likeness of their internal lexicon, remains uncertain. This study compares human and LLM-generated word associations to evaluate how accurately models capture human lexical patterns. Using English cue-response pairs from the SWOW dataset and newly generated associations from three LLMs (Mistral-7B, Llama-3.1-8B, and Qwen-2.5-32B) across multiple temperature settings, we examine (i) the influence of lexical factors such as word frequency and concreteness on cue-response pairs, and (ii) the variability and typicality of LLM responses compared to human responses. Results show that all models mirror human trends for frequency and concreteness but differ in response variability and typicality. Larger models such as Qwen tend to emulate a single "prototypical" human participant, generating highly typical but minimally variable responses, while smaller models such as Mistral and Llama produce more variable yet less typical responses. Temperature settings further modulate this trade-off, with higher values increasing variability but decreasing typicality. These findings highlight both the similarities and differences between human and LLM lexicons, emphasizing the need to account for model size and temperature when probing LLM lexical representations.
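The typicality/diversity trade-off described above can be illustrated with a toy computation. The sketch below is not the paper's actual methodology; it uses invented SWOW-style counts for a single cue and hypothetical model samples, scoring typicality as the mean human-norm probability of each model response and diversity as the Shannon entropy of the response distribution.

```python
from collections import Counter
import math

def typicality(model_responses, human_counts):
    """Mean human-norm probability of each model response (toy proxy)."""
    total = sum(human_counts.values())
    return sum(human_counts.get(r, 0) / total for r in model_responses) / len(model_responses)

def diversity(responses):
    """Shannon entropy (bits) of the response distribution; higher = more variable."""
    counts = Counter(responses)
    n = len(responses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented SWOW-style association counts for the cue "dog" (illustrative only)
human = {"cat": 40, "bark": 25, "pet": 20, "bone": 10, "loyal": 5}

# Hypothetical model samples at low vs. high temperature
low_temp = ["cat", "cat", "cat", "bark", "cat"]
high_temp = ["cat", "bone", "loyal", "puppy", "leash"]

print(typicality(low_temp, human), diversity(low_temp))    # typical, low entropy
print(typicality(high_temp, human), diversity(high_temp))  # varied, less typical
```

Under this toy scoring, the low-temperature samples are more typical but less diverse than the high-temperature ones, mirroring the qualitative pattern the abstract reports for temperature and model scale.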