🤖 AI Summary
This work addresses the challenges posed by Chinese social media text—rich in emerging slang and highly stylized expressions—for machine translation, particularly due to the scarcity of parallel data and the difficulty of evaluating stylistic fidelity. To this end, we present CSM-MTBench, the first multilingual translation benchmark specifically designed for Chinese social media, covering five Chinese–foreign language pairs. It comprises two expert-annotated subsets: “Fun Posts,” focusing on neologisms and slang, and “Social Snippets,” emphasizing affective style. We further propose a unified evaluation paradigm that integrates term translation success rate, embedding similarity, and large language model–based judgment (LLM-as-a-judge). Experiments across more than 20 state-of-the-art models reveal significant disparities in both semantic accuracy and stylistic preservation, underscoring the necessity and utility of our benchmark.
📝 Abstract
The prevalence of rapidly evolving slang, neologisms, and highly stylized expressions in informal user-generated text, particularly on Chinese social media, poses significant challenges for Machine Translation (MT) benchmarking. Specifically, we identify two primary obstacles: (1) data scarcity, as high-quality parallel data requires bilingual annotators familiar with platform-specific slang and stylistic cues in both languages; and (2) metric limitations, where traditional evaluators like COMET often fail to capture stylistic fidelity and nonstandard expressions. To bridge these gaps, we introduce CSM-MTBench, a benchmark covering five Chinese-foreign language directions and consisting of two expert-curated subsets: Fun Posts, featuring context-rich, slang- and neologism-heavy content, and Social Snippets, emphasizing concise, emotion- and style-driven expressions. Furthermore, we propose tailored evaluation approaches for each subset: measuring the translation success rate of slang and neologisms in Fun Posts, while assessing tone and style preservation in Social Snippets via a hybrid of embedding-based metrics and LLM-as-a-judge. Experiments on over 20 models reveal substantial variation in how current MT systems handle semantic fidelity and informal, social-media-specific stylistic cues. CSM-MTBench thus serves as a rigorous testbed for advancing MT systems capable of mastering real-world Chinese social media texts.
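The Fun Posts metric can be illustrated with a minimal sketch. Assuming each annotated slang term or neologism comes with a list of accepted target-language renderings (the benchmark's exact matching rule is not specified here, so case-insensitive substring matching is a simplifying assumption), the success rate is the fraction of terms for which the system output contains an accepted rendering:

```python
def term_success_rate(examples):
    """Fraction of annotated terms rendered acceptably in the translation.

    `examples` is a list of (translation, accepted_renderings) pairs,
    one pair per annotated slang term or neologism. A term counts as
    successful if any accepted rendering appears, case-insensitively,
    as a substring of the system output (a simplifying assumption).
    """
    if not examples:
        return 0.0
    hits = 0
    for translation, renderings in examples:
        text = translation.lower()
        if any(r.lower() in text for r in renderings):
            hits += 1
    return hits / len(examples)


# Hypothetical example: the slang term "yyds" with two accepted renderings.
examples = [
    ("LeBron is the GOAT, no debate.", ["goat", "greatest of all time"]),
    ("LeBron is yyds.", ["goat", "greatest of all time"]),
]
print(term_success_rate(examples))  # 0.5: the second output leaves the term untranslated
```

The Social Snippets side would instead compare sentence embeddings of source and output and combine that score with an LLM judge's style rating; the weighting between the two is a design choice of the benchmark, not shown here.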