SlangDIT: Benchmarking LLMs in Interpretative Slang Translation

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in slang translation by large language models (LLMs): difficulty capturing semantic extension and insufficient interpretability due to fragmented task design. To this end, we propose the novel paradigm of *explanatory slang translation*, introducing SlangDIT—the first English–Chinese bilingual slang benchmark (25K sentence pairs) supporting integrated detection, cross-lingual paraphrasing, and contextualized translation. Methodologically, we design SlangOWL, a deep-reasoning model that explicitly encodes causal reasoning paths for slang sense identification, context-specific paraphrasing, and translation decision-making, integrating chain-of-thought prompting, semantic disambiguation, and cross-lingual paraphrase guidance. Experiments demonstrate that SlangOWL significantly outperforms baselines on SlangDIT (+12.7 BLEU, +23.4% paraphrase accuracy), validating substantial improvements in both translation robustness and model interpretability.

📝 Abstract
The challenge of slang translation lies in capturing context-dependent semantic extensions, as slang terms often convey meanings beyond their literal interpretation. While slang detection, explanation, and translation have been studied as isolated tasks in the era of large language models (LLMs), their intrinsic interdependence remains underexplored. The main reason is the lack of a benchmark in which the first two tasks can serve as prerequisites for the third, which would facilitate idiomatic translation. In this paper, we introduce the interpretative slang translation task (named SlangDIT), consisting of three sub-tasks: slang detection, cross-lingual slang explanation, and slang translation within the current context, aiming to generate more accurate translations with the help of slang detection and slang explanation. To this end, we construct a SlangDIT dataset containing over 25k English-Chinese sentence pairs. Each source sentence mentions at least one slang term and is labeled with the corresponding cross-lingual slang explanation. Based on the benchmark, we propose a deep thinking model, named SlangOWL. It first identifies whether the sentence contains a slang term, then judges whether that term is polysemous and analyzes its possible meanings. Further, SlangOWL provides the best explanation of the slang term for the current context. Finally, based on this whole chain of thought, SlangOWL offers a suitable translation. Our experiments on LLMs (e.g., Qwen2.5 and Llama-3.1) show that our deep thinking approach indeed enhances the performance of LLMs: the proposed SlangOWL significantly surpasses both vanilla models and supervised fine-tuned models without thinking.
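The staged reasoning the abstract describes (detect a slang term, analyze its polysemy, explain it for the current context, then translate) can be sketched as a simple prompt chain. This is a minimal illustrative sketch, not the paper's implementation: the `ask` callable is a hypothetical stand-in for an LLM call (e.g. to Qwen2.5 or Llama-3.1), and all prompt wordings are assumptions; here it is stubbed so the flow can be run end to end.

```python
# Minimal sketch of a SlangDIT-style pipeline: detection -> explanation -> translation.
# `ask` is a hypothetical LLM interface; the prompts below are illustrative assumptions.

def slang_pipeline(sentence, ask):
    """Chain the three SlangDIT sub-tasks and return all intermediate results."""
    # Step 1: slang detection.
    slang = ask(f"Identify any slang term in: {sentence!r}. Reply with the term or 'none'.")
    if slang == "none":
        # No slang term: translate directly.
        return {"slang": None, "explanation": None,
                "translation": ask(f"Translate into Chinese: {sentence!r}")}
    # Step 2: polysemy analysis plus a context-specific cross-lingual explanation.
    explanation = ask(
        f"The term {slang!r} may be polysemous. List its possible meanings and "
        f"explain, in Chinese, the one that fits this context: {sentence!r}")
    # Step 3: translation conditioned on the chosen explanation.
    translation = ask(
        f"Using the explanation {explanation!r}, translate into Chinese: {sentence!r}")
    return {"slang": slang, "explanation": explanation, "translation": translation}

# Stubbed model for demonstration: answers keyed on how the prompt begins.
def fake_ask(prompt):
    if prompt.startswith("Identify"):
        return "spill the tea"
    if prompt.startswith("The term"):
        return "爆料（分享八卦）"
    return "快爆料！"

result = slang_pipeline("Come on, spill the tea!", fake_ask)
```

Conditioning the final translation step on the intermediate explanation is what distinguishes this from a one-shot translation prompt, mirroring the benchmark's premise that the first two sub-tasks are prerequisites for the third.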
Problem

Research questions and friction points this paper is trying to address.

Addressing context-dependent slang translation challenges
Exploring interdependence of slang detection, explanation, translation
Enhancing LLM performance in interpretative slang translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces SlangDIT dataset for slang translation
Proposes SlangOWL model with deep thinking steps
Combines detection, explanation, and translation tasks