Evaluating Contextualized Representations of (Spanish) Ambiguous Words: A New Lexical Resource and Empirical Analysis

📅 2024-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of evaluation resources for Spanish lexical ambiguity resolution by introducing SpanAmbig, the first publicly available, human-annotated minimal-pair sentence benchmark for ambiguous Spanish nouns, grounded in semantic relatedness judgments. The authors systematically evaluate contextualized word representations from monolingual and multilingual BERT-family models, combining a pre-registered behavioral study, inter-layer representational similarity analyses (CKA/RSA), and tightly controlled minimal-pair sentence design. Key contributions include: (1) releasing the first high-quality, open-source Spanish ambiguity evaluation dataset; (2) identifying stereotyped, layer-wise trajectories of ambiguity resolution across a model family's architecture, which partially replicate in English; and (3) showing that performance scales with model size while a persistent, significant gap remains between model representations and human semantic judgments. Overall, the results expose a limitation of current contextualized representations in fine-grained semantic discrimination.

📝 Abstract
Lexical ambiguity -- where a single wordform takes on distinct, context-dependent meanings -- serves as a useful tool for comparing language models' (LMs') ability to form distinct, contextualized representations of the same stimulus. Few studies have systematically compared LMs' contextualized word embeddings for languages beyond English. Here, we evaluate semantic representations of Spanish ambiguous nouns in context in a suite of Spanish-language monolingual and multilingual BERT-based models. We develop a novel dataset of minimal-pair sentences evoking the same or different sense for a target ambiguous noun. In a pre-registered study, we collect contextualized human relatedness judgments for each sentence pair. We find that various BERT-based LMs' contextualized semantic representations capture some variance in human judgments but fall short of the human benchmark. In exploratory work, we find that performance scales with model size. We also identify stereotyped trajectories of target noun disambiguation as a proportion of traversal through a given LM family's architecture, which we partially replicate in English. We contribute (1) a dataset of controlled, Spanish sentence stimuli with human relatedness norms, and (2) to our evolving understanding of the impact that LM specification (architectures, training protocols) exerts on contextualized embeddings.
Problem

Research questions and friction points this paper is trying to address.

Evaluate contextualized representations of Spanish ambiguous words.
Compare language models' ability to disambiguate word meanings.
Develop a dataset for assessing semantic representations in Spanish.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed a minimal-pair dataset of ambiguous Spanish nouns with human relatedness norms
Evaluated BERT-based models' contextualized embeddings against human judgments
Analyzed how disambiguation performance scales with model size
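The core evaluation logic can be sketched in a few lines: for each minimal sentence pair, compare the model's similarity between the target noun's two contextualized embeddings against the human relatedness rating for that pair, then score the alignment with a rank correlation. The sketch below is purely illustrative and is not the authors' code; the embedding vectors and human ratings are made-up toy values standing in for real model outputs and norms.

```python
# Illustrative sketch (not the authors' pipeline): score how well
# model-derived similarities for ambiguous target nouns track human
# relatedness judgments, via cosine similarity and Spearman correlation.
import math

def cosine(u, v):
    """Cosine similarity between two contextualized embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def spearman(xs, ys):
    """Spearman rank correlation (ignores ties; for illustration only)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy stand-ins: the target noun's embedding in each sentence of a
# minimal pair, plus a hypothetical human relatedness rating (1-7).
pairs = [
    ((0.9, 0.1, 0.0), (0.8, 0.2, 0.1), 6.5),  # same-sense pair
    ((0.9, 0.1, 0.0), (0.1, 0.9, 0.2), 2.0),  # different-sense pair
    ((0.5, 0.5, 0.1), (0.4, 0.6, 0.0), 5.8),
    ((0.2, 0.8, 0.3), (0.9, 0.0, 0.1), 1.5),
]
model_sims = [cosine(u, v) for u, v, _ in pairs]
human = [h for _, _, h in pairs]
print(round(spearman(model_sims, human), 3))  # → 0.8
```

In the paper's actual setting, the embeddings would come from a specific layer of a BERT-based model, and the same comparison can be repeated layer by layer to trace how disambiguation unfolds across the architecture.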
Pamela D. Rivière
Department of Cognitive Science, UC San Diego
Anne L. Beatty-Martínez
Department of Cognitive Science, UC San Diego
Sean Trott
Assistant Teaching Professor, UC San Diego
cognitive science · pragmatic inference · ambiguity · large language models