🤖 AI Summary
This study systematically evaluates large language models (LLMs) on lexical definition generation, benchmarking their accuracy and consistency against authoritative dictionaries (e.g., OED, COED, LDOCE) and classical static word embeddings (GloVe, FastText). Using ChatGPT-series models to generate definitions for high- and low-frequency words, the authors combine human evaluation with surface-form similarity analysis. Results show that LLMs achieve overall definition accuracy comparable to authoritative dictionaries; that they significantly outperform static embeddings on low-frequency words, demonstrating superior robustness and generalization; and that inter-model definition similarity is lower than inter-dictionary similarity, indicating greater standardization among lexicographic definitions. To the authors' knowledge, this is the first empirical validation of LLMs as viable dynamic lexical resources, establishing a novel interdisciplinary paradigm bridging lexicography and NLP.
📝 Abstract
Dictionary definitions have historically been the arbiter of what words mean, but this primacy has come under threat from recent progress in NLP, including word embeddings and generative models like ChatGPT. We present an exploratory study of the degree of alignment between word definitions from classical dictionaries and these newer computational artifacts. Specifically, we compare definitions from three published dictionaries to those generated by variants of ChatGPT. We show that (i) definitions from different traditional dictionaries exhibit more surface-form similarity than do model-generated definitions, (ii) ChatGPT definitions are highly accurate, comparable to those of traditional dictionaries, and (iii) ChatGPT definitions retain this accuracy even on low-frequency words, where GloVe and FastText word embeddings degrade substantially.
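The surface-form similarity comparison can be illustrated with a minimal sketch. The paper does not specify its exact similarity metric, so the use of `difflib.SequenceMatcher` here is an assumption, and the two example definitions are hypothetical stand-ins for dictionary entries:

```python
from difflib import SequenceMatcher

def surface_similarity(def_a: str, def_b: str) -> float:
    """Return a 0.0-1.0 ratio of matching character runs between two
    definition strings (case-insensitive). Illustrative metric only;
    the paper's actual similarity measure may differ."""
    return SequenceMatcher(None, def_a.lower(), def_b.lower()).ratio()

# Hypothetical definitions of "arbiter" from two sources.
source_a = "a person who settles a dispute or has ultimate authority in a matter"
source_b = "a person with the power to settle a dispute or decide a matter"
score = surface_similarity(source_a, source_b)
```

Averaging such pairwise scores across many words, separately for dictionary pairs and for model pairs, would yield the inter-dictionary versus inter-model comparison the abstract describes.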