🤖 AI Summary
This paper identifies a critical limitation of BERTScore in financial text semantic similarity evaluation: its insensitivity to numerical changes, rendering it unable to distinguish semantically divergent financial expressions such as “2% gain” versus “20% loss.” To address this, we introduce FinNuE—the first diagnostic dataset designed specifically for evaluating numerical semantic sensitivity in finance—covering diverse sources including earnings call transcripts, regulatory filings, news articles, and social media posts, with systematically controlled numerical perturbations. Empirical results demonstrate that BERTScore frequently assigns spuriously high similarity scores to financially distinct sentence pairs, whose embeddings remain close even when their numerical values point in opposite directions, exposing substantial reliability risks in financial applications. Our work not only reveals a fundamental deficiency of mainstream embedding-based metrics in modeling numerical semantics but also provides a reproducible benchmark to advance the development of numerically aware evaluation frameworks.
📝 Abstract
BERTScore has become a widely adopted metric for evaluating semantic similarity between natural language sentences. However, we identify a critical limitation: BERTScore exhibits low sensitivity to numerical variation, a significant weakness in finance where numerical precision directly affects meaning (e.g., distinguishing a 2% gain from a 20% loss). We introduce FinNuE, a diagnostic dataset constructed with controlled numerical perturbations across earnings calls, regulatory filings, social media, and news articles. Using FinNuE, we demonstrate that BERTScore fails to distinguish semantically critical numerical differences, often assigning high similarity scores to financially divergent text pairs. Our findings reveal fundamental limitations of embedding-based metrics for finance and motivate numerically aware evaluation frameworks for financial NLP.
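The failure mode described above can be illustrated with a toy sketch of BERTScore's greedy-matching F1. The embeddings below are hand-made unit vectors (an assumption for illustration, not the paper's actual data or model): two sentences share their context tokens and differ only in a numeric token, whose vectors are placed close together to mimic how subword encoders often embed nearby numerals.

```python
import numpy as np

def bertscore_f1(cand, ref):
    """BERTScore-style F1 from token embedding matrices (rows = tokens).

    Precision: each candidate token greedily matches its most similar
    reference token; recall is the symmetric direction. Embeddings are
    assumed to be unit-normalized, so dot products are cosine similarities.
    """
    sim = cand @ ref.T                   # pairwise cosine similarity matrix
    precision = sim.max(axis=1).mean()   # best match per candidate token
    recall = sim.max(axis=0).mean()      # best match per reference token
    return 2 * precision * recall / (precision + recall)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Hypothetical embeddings: three shared context tokens plus one numeric
# token. "2%" and "20%" get nearly parallel vectors, as contextual
# encoders tend to produce for surface-similar numerals.
context = [unit([1, 0, 0]), unit([0, 1, 0]), unit([0, 0, 1])]
ref = np.stack(context + [unit([1, 1, 0.10])])   # "... a 2% gain ..."
cand = np.stack(context + [unit([1, 1, 0.12])])  # "... a 20% gain ..."

# Despite a 10x change in the number, the score stays near the maximum.
print(round(float(bertscore_f1(cand, ref)), 4))
```

Under these assumptions the F1 exceeds 0.999: because only one token vector moves, and barely, the greedy token matching is dominated by the unchanged context, which is exactly the insensitivity FinNuE is designed to probe.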