Revisiting Word Embeddings in the LLM Era

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether the superior performance of large language model (LLM)-derived word/sentence embeddings stems from scale alone or reflects fundamental representational differences. We systematically compare layer-averaged and [CLS] embeddings from prominent LLMs (Llama, ChatGLM) against classical static and contextualized models—including Word2Vec, GloVe, SBERT, USE, and SimCSE—on both non-contextual tasks (WS353, SimLex-999, Google Analogy) and contextual sentence similarity benchmarks. Results show that LLM word embeddings significantly outperform static embeddings in lexical semantic clustering and analogy reasoning; however, SimCSE remains superior on sentence similarity, leading by an average of +4.2% in Spearman correlation. Crucially, we identify for the first time that LLM word embeddings exhibit markedly higher context-free semantic compactness—challenging the “universal embedding” assumption for LLMs—and provide empirical guidance for embedding model selection and design.
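The summary contrasts two ways of collapsing a model's per-token hidden states into a single embedding: averaging over tokens versus taking one designated token's vector ([CLS]-style). A minimal sketch of the two pooling strategies, using hypothetical toy hidden states rather than real model outputs:

```python
# Two pooling strategies for turning per-token hidden states into one vector.
# All values below are hypothetical stand-ins for real LLM hidden states.

def mean_pool(hidden_states):
    # Average each dimension across all token vectors.
    dim = len(hidden_states[0])
    n = len(hidden_states)
    return [sum(tok[d] for tok in hidden_states) / n for d in range(dim)]

def cls_pool(hidden_states):
    # Use a single designated token's vector (here: the first token).
    return hidden_states[0]

# Hypothetical 4-token, 3-dimensional hidden states for one sentence.
h = [[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [1.0, 1.0, 2.0], [2.0, 0.0, 2.0]]
print(mean_pool(h))  # → [1.0, 0.5, 2.0]
print(cls_pool(h))   # → [1.0, 0.0, 2.0]
```

Mean-pooling lets every token contribute, which matters for decoder-only LLMs that lack a trained [CLS] token; the paper's "layer-averaged" setup additionally averages across layers.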

📝 Abstract
Large Language Models (LLMs) have recently shown remarkable advancement in various NLP tasks. As such, a popular trend has emerged lately where NLP researchers extract word/sentence/document embeddings from these large decoder-only models and use them for various inference tasks with promising results. However, it is still unclear whether the performance improvement of LLM-induced embeddings is merely because of scale or whether the embeddings they produce differ fundamentally from those of classical encoding models like Word2Vec, GloVe, Sentence-BERT (SBERT) or Universal Sentence Encoder (USE). This is the central question we investigate in this paper by systematically comparing classical decontextualized and contextualized word embeddings with their LLM-induced counterparts. Our results show that LLMs cluster semantically related words more tightly and perform better on analogy tasks in decontextualized settings. However, in contextualized settings, classical models like SimCSE often outperform LLMs in sentence-level similarity assessment tasks, highlighting their continued relevance for fine-grained semantics.
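The word-similarity benchmarks named in the summary (WS353, SimLex-999) follow a standard protocol: score each word pair by the cosine similarity of its embeddings, then report the Spearman rank correlation against human ratings. A self-contained sketch of that protocol, with toy 3-d vectors standing in for real embeddings (all values hypothetical):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def ranks(xs):
    # Assign ranks (1-based), averaging over ties as Spearman requires.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Pearson correlation of the two rank vectors.
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical embeddings and human similarity ratings for three word pairs.
emb = {"cat": [0.9, 0.1, 0.0], "dog": [0.8, 0.2, 0.1],
       "car": [0.0, 0.9, 0.3], "truck": [0.2, 0.8, 0.3]}
pairs = [("cat", "dog"), ("car", "truck"), ("cat", "car")]
human = [8.5, 8.0, 2.0]
model = [cosine(emb[a], emb[b]) for a, b in pairs]
print(round(spearman(human, model), 3))  # → 1.0 (model ranks pairs like humans)
```

Because Spearman compares rankings rather than raw scores, it is insensitive to the absolute scale of cosine similarities, which is why it is the standard metric for these benchmarks.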
Problem

Research questions and friction points this paper is trying to address.

Compare LLM-induced embeddings with classical models
Assess performance in decontextualized and contextualized settings
Evaluate semantic clustering and analogy task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares LLM-induced embeddings with classical models
LLMs excel in decontextualized word clustering
Classical models outperform in contextualized sentence similarity
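The analogy evaluation behind the bullets above (Google Analogy) is commonly run with the 3CosAdd rule for a : b :: c : ?, which returns the vocabulary word, excluding the three query words, closest in cosine to b − a + c. A toy sketch with hypothetical 2-d vectors:

```python
import math

def cos(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def analogy(vecs, a, b, c):
    # 3CosAdd: maximize cos(vec[b] - vec[a] + vec[c], vec[w]),
    # excluding the query words themselves.
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(target, candidates[w]))

# Hypothetical 2-d vectors encoding a "gender" offset along the second axis.
vecs = {"man": [1.0, 0.0], "woman": [1.0, 1.0],
        "king": [2.0, 0.0], "queen": [2.0, 1.0],
        "apple": [0.0, 2.0]}
print(analogy(vecs, "man", "woman", "king"))  # → queen
```

Excluding the query words matters: without it, one of the inputs (often c itself) is frequently the nearest neighbor of the offset vector.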