🤖 AI Summary
This work proposes a fine-grained paradigm for modeling semantic equivalence by decomposing paraphrasing into specific linguistic operations (paraphrase types), such as lexical substitution and syntactic transformation, to construct structured, cognitively grounded representations of meaning. Rather than relying on coarse binary classification or single rewrites, the approach explicitly captures the nuanced mechanisms underlying paraphrase generation. Language models trained on paraphrase-type annotations achieve 89.6% accuracy on a Wikipedia plagiarism detection task and 66.5% on an arXiv dataset, surpassing human baselines (78.4% and 55.7%, respectively). The method also yields consistent gains on downstream tasks such as Quora duplicate question detection, validating its effectiveness for both paraphrase understanding and controllable generation.
📝 Abstract
Language enables humans to share knowledge, reason about the world, and pass on strategies for survival and innovation across generations. At the heart of this process is not just the ability to communicate but also the remarkable flexibility with which we express ourselves. We can convey the same thought in virtually infinite ways using different words and structures - this ability to rephrase and reformulate expressions is known as paraphrasing. Modeling paraphrases is a keystone of meaning in computational language models: constructing variations of a text that preserve, or deliberately alter, its meaning demonstrates strong semantic understanding. If computational language models are to represent meaning, they must understand and control, at a fine granularity, the aspects that keep meaning the same as opposed to those that change it. Yet most existing approaches reduce paraphrasing to a binary decision between two texts or to producing a single rewrite of a source, obscuring which linguistic factors are responsible for meaning preservation. In this thesis, I propose that decomposing paraphrases into their constituent linguistic aspects (paraphrase types) offers a more fine-grained and cognitively grounded view of semantic equivalence. I show that even advanced machine learning models struggle with this task. Yet, when explicitly trained on paraphrase types, models achieve stronger performance on related paraphrase tasks and downstream applications. For example, in plagiarism detection, language models trained on paraphrase types surpass human baselines: 89.6% accuracy compared to 78.4% on plagiarism cases from Wikipedia, and 66.5% compared to 55.7% on plagiarized scientific papers from arXiv. In identifying duplicate questions on Quora, models trained with paraphrase types improve over models trained on binary pairs. Furthermore, I demonstrate that...