On Self-improving Token Embeddings

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the out-of-vocabulary (OOV) problem and the insufficient semantic representation of static word embeddings in domain-specific corpora, this paper proposes a lightweight, model-free, non-parametric iterative embedding optimization method. The method relies solely on co-occurrence statistics and context-aware neighborhood-weighted aggregation—requiring no large language models, neural networks, or gradient-based optimization—and supports incremental updates from zero-initialized vectors, which inherently mitigates OOV issues and enables the modeling of semantic evolution. Its core contribution is a fully deep-learning-free self-improving embedding mechanism. Experiments on the NOAA storm event corpus demonstrate a 27% improvement in analogy and clustering performance for storm-related terms, significantly enhancing concept retrieval, event impact attribution, and polysemy disambiguation.

📝 Abstract
This article introduces a novel and fast method for refining pre-trained static word or, more generally, token embeddings. By incorporating the embeddings of neighboring tokens in text corpora, it continuously updates the representation of each token, including those without pre-assigned embeddings. This approach effectively addresses the out-of-vocabulary problem, too. Operating independently of large language models and shallow neural networks, it enables versatile applications such as corpus exploration, conceptual search, and word sense disambiguation. The method is designed to enhance token representations within topically homogeneous corpora, where the vocabulary is restricted to a specific domain, resulting in more meaningful embeddings compared to general-purpose pre-trained vectors. As an example, the methodology is applied to explore storm events and their impacts on infrastructure and communities using narratives from a subset of the NOAA Storm Events database. The article also demonstrates how the approach improves the representation of storm-related terms over time, providing valuable insights into the evolving nature of disaster narratives.
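The abstract describes the core mechanism: each token's representation is repeatedly blended with the embeddings of its neighboring tokens in the corpus, and tokens without pre-assigned embeddings start from zero vectors and acquire meaning purely from context. The sketch below illustrates that idea under stated assumptions; the function name, window size, blending rate, and update schedule are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def refine_embeddings(corpus, embeddings, dim=50, window=2, rate=0.1, epochs=5):
    """Iteratively refine token embeddings via neighbor aggregation (sketch).

    corpus     -- list of tokenized sentences (lists of strings)
    embeddings -- dict mapping token -> pretrained vector (list or array)
    Tokens absent from `embeddings` (OOV) are zero-initialized and gain a
    representation purely from the neighbors they co-occur with.
    """
    vecs = {t: np.asarray(v, dtype=float) for t, v in embeddings.items()}
    for sent in corpus:
        for tok in sent:
            vecs.setdefault(tok, np.zeros(dim))  # zero-init OOV tokens

    for _ in range(epochs):
        new = {t: v.copy() for t, v in vecs.items()}
        for sent in corpus:
            for i, tok in enumerate(sent):
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                nbrs = [vecs[sent[j]] for j in range(lo, hi) if j != i]
                if nbrs:
                    # blend the token's vector toward its context average
                    ctx = np.mean(nbrs, axis=0)
                    new[tok] = (1 - rate) * new[tok] + rate * ctx
        vecs = new
        # renormalize non-zero vectors so cosine comparisons stay meaningful
        for t, v in vecs.items():
            n = np.linalg.norm(v)
            if n > 0:
                vecs[t] = v / n
    return vecs
```

Because updates depend only on local co-occurrence, the procedure needs no gradients or neural components, and repeated passes over a topically homogeneous corpus pull domain terms toward their in-domain senses.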
Problem

Research questions and friction points this paper is trying to address.

Refining pre-trained static token embeddings efficiently
Solving out-of-vocabulary issues in domain-specific corpora
Enhancing embeddings for corpus exploration and disambiguation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token embeddings via neighbor incorporation
Domain-specific vocabulary enhancement method
Independent of large language models
Mario M. Kubek
Georgia State University, Atlanta, GA, USA
Shiraj Pokharel
Georgia State University, Atlanta, GA, USA
Thomas Böhme
Technische Universität Ilmenau, Ilmenau, Germany
E. L. McDaniel
Georgia State University, Atlanta, GA, USA
Herwig Unger
FernUniversität in Hagen
Big Data algorithms and technologies, Web-based systems and applications, Internet search and information retrieval, P2P syst
Armin R. Mikler
Professor and Chair, Department of Computer Science, Georgia State University
Intelligent Agents, Public Health Informatics, Contagion Modeling, Computational Epidemiology, Computational Response Planning