Learning to Substitute Words with Model-based Score Ranking

📅 2025-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor generalizability of intelligent lexical substitution methods caused by reliance on subjective human annotations. We propose the first unsupervised lexical substitution learning framework, abandoning manual annotation in favor of model-based quality scores—such as BARTScore—as surrogate supervision signals. Our approach constructs a word replacement distribution and statistical significance criterion grounded in these scores, and introduces a quality-score alignment loss for end-to-end self-supervised optimization. The key innovation lies in unifying model-based quality evaluation, statistical significance testing, and sequence-level substitution modeling into a single unsupervised training objective. Experiments demonstrate that our method significantly outperforms strong baselines—including BERT, BART, GPT-4, and LLaMA—across multiple benchmarks, substantially improving post-substitution sentence quality. The source code is publicly available.

📝 Abstract
Smart word substitution aims to enhance sentence quality by improving word choices; however, current benchmarks rely on human-labeled data. Since word choices are inherently subjective, ground-truth word substitutions generated by a small group of annotators are often incomplete and likely not generalizable. To circumvent this issue, we instead employ a model-based score (BARTScore) to quantify sentence quality, thus forgoing the need for human annotations. Specifically, we use this score to define a distribution for each word substitution, allowing one to test whether a substitution is statistically superior relative to others. In addition, we propose a loss function that directly optimizes the alignment between model predictions and sentence scores, while also enhancing the overall quality score of a substitution. Crucially, model learning no longer requires human labels, thus avoiding the cost of annotation while maintaining the quality of the text modified with substitutions. Experimental results show that the proposed approach outperforms both masked language models (BERT, BART) and large language models (GPT-4, LLaMA). The source code is available at https://github.com/Hyfred/Substitute-Words-with-Ranking.
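The core idea in the abstract — score every candidate substitution with a model-based metric, then test whether the top candidate is statistically superior to the rest — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `quality_score` is a hypothetical keyword heuristic standing in for BARTScore, and the z-score check is a simplified stand-in for the paper's significance criterion.

```python
import statistics

def quality_score(sentence: str) -> float:
    # Hypothetical stand-in for a model-based score such as BARTScore:
    # reward a few "good" words, lightly penalize length.
    weights = {"excellent": 2.0, "clear": 1.0}
    words = sentence.lower().split()
    return sum(weights.get(w, 0.0) for w in words) - 0.01 * len(words)

def rank_substitutions(template: str, slot: str, candidates: list[str]):
    """Score each candidate filled into the slot and rank best-first."""
    scored = [(quality_score(template.replace(slot, c)), c) for c in candidates]
    return sorted(scored, reverse=True)

def is_significantly_better(scores: list[float], best: float,
                            z_threshold: float = 1.0) -> bool:
    """Crude significance check: is the best score more than
    z_threshold standard deviations above the candidate mean?"""
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1e-9
    return (best - mu) / sd > z_threshold

ranked = rank_substitutions("this is a <mask> result", "<mask>",
                            ["good", "excellent", "clear"])
best_score, best_word = ranked[0]
significant = is_significantly_better([s for s, _ in ranked], best_score)
```

Under the toy weights, "excellent" ranks first and clears the z-score threshold; with a real model-based scorer the same ranking-plus-test structure applies, only the scoring function changes.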
Problem

Research questions and friction points this paper is trying to address.

How to enhance sentence quality through smarter word choices
How to remove the reliance on subjective human-labeled substitution data
How to optimize model predictions directly for substitution quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based score ranking
No human annotations required
Optimized loss function alignment
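The "alignment" idea in the innovations — train the model so its predicted preferences agree with the model-based quality scores — can be illustrated with a pairwise ranking loss. This is a generic sketch, not the paper's exact objective: a simple hinge loss (with a hypothetical `margin` parameter) that penalizes the model whenever a lower-quality candidate receives a higher prediction.

```python
def alignment_loss(pred: list[float], quality: list[float],
                   margin: float = 0.1) -> float:
    """Pairwise hinge loss: for every candidate pair where quality[i] >
    quality[j], require pred[i] to exceed pred[j] by at least `margin`."""
    loss, n_pairs = 0.0, 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if quality[i] > quality[j]:
                loss += max(0.0, margin - (pred[i] - pred[j]))
                n_pairs += 1
    return loss / max(n_pairs, 1)

# Predictions already ordered like the quality scores -> zero loss.
aligned = alignment_loss([2.0, 1.0], [1.0, 0.0])
# Predictions inverted relative to quality -> large loss.
misaligned = alignment_loss([1.0, 2.0], [1.0, 0.0])
```

Because the loss depends only on relative ordering, it matches the ranking framing of the paper's title; the actual method additionally ties the objective to the statistical significance criterion over the score distribution.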