🤖 AI Summary
This study challenges the implicit assumption that geometric proximity (e.g., cosine similarity) in sentence embedding spaces reflects semantic or functional similarity, asking whether such geometric properties can predict relative performance on downstream language tasks.
Method: Within a unified Transformer framework, we systematically compare three embedding strategies—mean-pooled token, [CLS] token, and randomly selected token embeddings—across multiple NLP tasks. We conduct rigorous distance–performance correlation analysis to assess how well cosine similarity predicts task accuracy.
Contribution/Results: We find that cosine similarity captures only shallow, surface-level lexical commonalities and fails to reliably predict downstream performance. Crucially, task-relevant semantic similarity is encoded via dimensionally weighted combinations rather than isotropic geometric proximity; thus, embeddings with large geometric distances in high-dimensional space may still encode highly similar task-specific semantics. This work provides the first empirical evidence of a substantial decoupling between the geometric structure of sentence embeddings and their functional utility, establishing a new paradigm for embedding evaluation and design.
📝 Abstract
Transformer models learn to encode and decode input text, producing contextual token embeddings as a side-effect. This mapping from language into the embedding space places words expressing similar concepts at nearby points. In practice, the reverse implication is also assumed: words corresponding to close points in this space are similar or related, while those that are farther apart are not.
Does closeness in the embedding space extend to shared properties for sentence embeddings? We present an investigation of sentence embeddings and show that the geometry of their embedding space is not predictive of their relative performance on a variety of tasks.
We compute sentence embeddings in three ways: as averaged token embeddings, as the embedding of the special [CLS] token, and as the embedding of a random token from the sentence. We explore whether there is a correlation between the distance between these sentence embedding variants and their performance on linguistic tasks, and whether, despite their distances, they encode the same information in the same manner.
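The three pooling strategies can be sketched in plain NumPy. This is a minimal illustration, assuming a matrix of contextual token embeddings (tokens × hidden dimensions) has already been produced by the Transformer; the shapes and variable names here are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the contextual token embeddings of one sentence:
# rows are tokens (with [CLS] first), columns are hidden dimensions.
token_embeddings = rng.normal(size=(12, 768))

# 1) Mean-pooled sentence embedding: average over all token vectors.
mean_pooled = token_embeddings.mean(axis=0)

# 2) [CLS] sentence embedding: the vector of the first ([CLS]) token.
cls_embedding = token_embeddings[0]

# 3) Random-token sentence embedding: one non-[CLS] token chosen at random.
random_idx = rng.integers(1, token_embeddings.shape[0])
random_token = token_embeddings[random_idx]

print(mean_pooled.shape, cls_embedding.shape, random_token.shape)
```

All three variants live in the same space and have the same dimensionality, which is what makes their pairwise cosine distances directly comparable.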
The results show that cosine similarity -- which weights all dimensions equally -- captures only surface-level commonalities or differences between sentence embeddings, which are not predictive of their performance on specific tasks. Linguistic information is instead encoded in weighted combinations of different dimensions, which are not reflected in the geometry of the sentence embedding space.
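A toy numeric example (hypothetical values, not from the paper) makes the decoupling concrete: two vectors can be geometrically distant under cosine similarity while agreeing exactly on the dimensions that a task-specific weighting actually uses.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity treats every dimension with equal weight."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two 6-d "sentence embeddings" that agree on dims 0-1 but
# diverge strongly on dims 2-5 (illustrative values).
u = np.array([1.0, 2.0,  5.0,  5.0,  5.0,  5.0])
v = np.array([1.0, 2.0, -5.0, -5.0, -5.0, -5.0])

# Geometric view: cosine sees the disagreement on dims 2-5 and
# reports the vectors as strongly dissimilar (negative similarity).
print(cosine_similarity(u, v))

# Task view: a probe whose weights sit on dims 0-1 only
# assigns both embeddings an identical task-relevant score.
w = np.array([0.7, 0.3, 0.0, 0.0, 0.0, 0.0])
print(u @ w, v @ w)
```

Here the cosine similarity is strongly negative, yet under the weighting `w` the two embeddings are functionally indistinguishable -- the geometry of the full space says nothing about their task behavior.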