Revisiting Word Embeddings in the LLM Era

🤖 AI Summary
This study investigates whether the superior performance of large language model (LLM)-generated word and sentence embeddings stems solely from scale, or whether it reflects fundamentally different semantic representation mechanisms than traditional encoders (e.g., Word2Vec, GloVe, SBERT, USE). The authors conduct a systematic comparative evaluation across decontextualized (word analogy) and contextualized (sentence similarity) settings, employing K-means clustering, the Google Analogy dataset, STS benchmarks, and t-SNE visualization. The key finding: LLM embeddings significantly outperform traditional methods on analogy tasks (up to +12.7% accuracy) and yield tighter semantic clusters; on fine-grained sentence similarity estimation, however, classical approaches such as SimCSE remain consistently superior, by an average margin of 8.3 percentage points. These results challenge the implicit "scale is all you need" assumption and show that traditional embedding methods retain distinct advantages for specific semantic modeling tasks.
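The decontextualized analogy evaluation mentioned above (Google Analogy style) is typically run with the vector-offset method: answer "a is to b as c is to ?" by finding the word whose embedding is closest to b − a + c under cosine similarity. A minimal sketch with made-up toy vectors (real experiments would use Word2Vec/GloVe vectors or LLM hidden states):

```python
import numpy as np

# Hypothetical 2-D word vectors, chosen only to illustrate the offset method.
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([2.0, 0.0]),
    "queen": np.array([2.0, 1.0]),
    "apple": np.array([0.0, 3.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, emb):
    """Answer 'a is to b as c is to ?' via the vector-offset method."""
    target = emb[b] - emb[a] + emb[c]
    # Standard practice: exclude the three query words from the candidates.
    candidates = {w: v for w, v in emb.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "woman", "king", emb))  # queen
```

Accuracy on a benchmark like Google Analogy is then simply the fraction of quadruples for which the top-ranked candidate matches the gold answer.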

📝 Abstract
Large Language Models (LLMs) have recently shown remarkable advancement in various NLP tasks. As such, a popular trend has emerged lately where NLP researchers extract word/sentence/document embeddings from these large decoder-only models and use them for various inference tasks with promising results. However, it is still unclear whether the performance improvement of LLM-induced embeddings is merely due to scale, or whether the embeddings they produce differ significantly from those of classical encoding models like Word2Vec, GloVe, Sentence-BERT (SBERT), or Universal Sentence Encoder (USE). This is the central question we investigate in this paper by systematically comparing classical decontextualized and contextualized word embeddings with their LLM-induced counterparts. Our results show that LLMs cluster semantically related words more tightly and perform better on analogy tasks in decontextualized settings. However, in contextualized settings, classical models like SimCSE often outperform LLMs on sentence-level similarity assessment tasks, highlighting their continued relevance for fine-grained semantics.
Problem

Research questions and friction points this paper is trying to address.

Compare LLM-induced embeddings with classical encoding models.
Assess performance differences in decontextualized and contextualized settings.
Evaluate semantic clustering and analogy task performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares LLM-induced embeddings with classical models
Analyzes semantic clustering in decontextualized settings
Evaluates sentence-level similarity in contextualized settings
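The contextualized evaluation in the last bullet follows the usual STS protocol: compute cosine similarity between sentence-pair embeddings and report the Spearman rank correlation against human gold scores. A minimal sketch, with toy embeddings and gold scores invented purely for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical (embedding_1, embedding_2, gold_score) triples; a real run
# would embed STS benchmark sentence pairs with SimCSE or an LLM.
pairs = [
    (np.array([1.0, 0.0]), np.array([0.9, 0.1]), 4.8),
    (np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5),
    (np.array([0.5, 0.5]), np.array([0.4, 0.6]), 4.0),
]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

preds = [cosine(u, v) for u, v, _ in pairs]
gold = [g for _, _, g in pairs]

# Spearman correlation: rank agreement between model and human judgments.
rho, _ = spearmanr(preds, gold)
print(f"Spearman rho: {rho:.2f}")
```

Spearman (rather than Pearson) correlation is the standard STS metric because only the ranking of similarities matters, not their absolute scale.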
Matthew Freestone
BDI Lab, Auburn University, Alabama, USA
Shubhra Kanti Karmaker Santu
BDI Lab, Auburn University, Alabama, USA