🤖 AI Summary
This work addresses the scarcity of evaluation resources for Spanish lexical ambiguity resolution by introducing SpanAmbig, a publicly released, human-annotated benchmark of minimal-pair sentences for ambiguous Spanish nouns, grounded in semantic relatedness judgments. The authors systematically evaluate contextualized word representations from monolingual and multilingual BERT-family models, combining a preregistered behavioral study, layer-wise representational analyses, and tightly controlled minimal-pair stimulus design. Key contributions include: (1) an open Spanish ambiguity evaluation dataset with human relatedness norms; (2) stereotyped, layer-wise trajectories of target-noun disambiguation within model families, partially replicated in English; and (3) exploratory evidence that performance scales with model size, alongside a persistent gap between model representations and human semantic judgments. Overall, the results point to a limitation of current contextualized representations in fine-grained semantic discrimination.
📝 Abstract
Lexical ambiguity -- where a single wordform takes on distinct, context-dependent meanings -- serves as a useful tool for comparing language models' (LMs') ability to form distinct, contextualized representations of the same stimulus. Few studies have systematically compared LMs' contextualized word embeddings for languages beyond English. Here, we evaluate semantic representations of Spanish ambiguous nouns in context across a suite of monolingual Spanish and multilingual BERT-based models. We develop a novel dataset of minimal-pair sentences evoking the same or different sense of a target ambiguous noun. In a pre-registered study, we collect contextualized human relatedness judgments for each sentence pair. We find that various BERT-based LMs' contextualized semantic representations capture some variance in human judgments but fall short of the human benchmark. In exploratory work, we find that performance scales with model size. We also identify stereotyped trajectories of target-noun disambiguation as a proportion of traversal through a given LM family's architecture, which we partially replicate in English. We contribute (1) a dataset of controlled Spanish sentence stimuli with human relatedness norms, and (2) new evidence on how LM specification (architecture, training protocol) shapes contextualized embeddings.
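The evaluation pipeline the abstract describes -- scoring a target noun's relatedness across a minimal sentence pair from its per-layer contextualized embeddings, then comparing those scores against human judgments -- can be sketched as follows. This is a minimal illustration, not the authors' code: random arrays stand in for real BERT-layer embeddings and human norms, and all function names and shapes are assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def layerwise_relatedness(emb_a, emb_b):
    """Per-layer cosine similarity for a target noun's embeddings in the
    two sentences of a minimal pair (each array: n_layers x hidden)."""
    return [cosine(a, b) for a, b in zip(emb_a, emb_b)]

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of ranks
    (assumes no tied values, which holds for continuous scores)."""
    rx = np.argsort(np.argsort(np.asarray(x)))
    ry = np.argsort(np.argsort(np.asarray(y)))
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy stand-ins: in practice emb_a/emb_b would come from a BERT-family
# model's hidden states at the target noun's token position.
rng = np.random.default_rng(0)
n_layers, hidden, n_pairs = 12, 768, 20
human_scores = rng.uniform(0, 1, n_pairs)  # placeholder human judgments
model_scores = []
for _ in range(n_pairs):
    emb_a = rng.normal(size=(n_layers, hidden))
    emb_b = rng.normal(size=(n_layers, hidden))
    # use the final layer's similarity as the model's relatedness score
    model_scores.append(layerwise_relatedness(emb_a, emb_b)[-1])

rho = spearman(model_scores, human_scores)
```

The per-layer scores from `layerwise_relatedness` are what would trace the "trajectory" of disambiguation through a model's architecture; correlating any one layer's scores with human norms quantifies how much variance in human judgments that layer captures.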