🤖 AI Summary
Large language models (LLMs) still struggle with discourse-level phenomena—such as pronoun resolution and lexical cohesion—in document-level machine translation. This paper presents empirical evidence that LLMs implicitly encode discourse-aware translation knowledge in their internal representations. Building on this finding, it proposes Quality-Aware Decoding (QAD), a decoding strategy that extracts this implicit knowledge through context-sensitive analysis and alignment with human preferences. Compared with standard decoding methods, QAD improves the semantic richness, discourse coherence, and fidelity of translations, consistently outperforming mainstream baselines across multiple automatic and human evaluations and producing output closer to professional human translation. The work points to a practical way of uncovering and harnessing latent discourse capabilities in LLMs for high-quality document-level translation.
📝 Abstract
Large language models (LLMs) have emerged as strong contenders in machine translation. Yet they still struggle to adequately handle discourse phenomena, such as pronoun resolution and lexical cohesion, at the document level. In this study, we thoroughly investigate the performance of LLMs on discourse phenomena in context-aware translation. We demonstrate that discourse knowledge is encoded within LLMs and propose the use of quality-aware decoding (QAD) to effectively extract this knowledge, showcasing its superiority over other decoding approaches through comprehensive analysis. Furthermore, we illustrate that QAD enhances the semantic richness of translations and aligns them more closely with human preferences.
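In the literature, quality-aware decoding generally means sampling several candidate translations and selecting the one preferred by a quality-estimation (QE) scorer, rather than trusting the single greedy output. The sketch below illustrates that candidates-then-rerank shape in miniature; the paper's actual QAD procedure is not specified here, and `generate_candidates` and `quality_score` are hypothetical stand-ins (a toy word shuffler and a word-overlap scorer) for an LLM sampler and a learned QE model.

```python
# Minimal sketch of quality-aware decoding as candidate reranking.
# NOTE: generate_candidates() and quality_score() are toy stand-ins,
# not the paper's method or any real library API.

import random


def generate_candidates(source, n=4, seed=0):
    """Stand-in for sampling n translation hypotheses from an LLM.
    Toy version: permute the source words to get distinct 'hypotheses'."""
    rng = random.Random(seed)
    words = source.split()
    candidates = []
    for _ in range(n):
        hyp = words[:]
        rng.shuffle(hyp)
        candidates.append(" ".join(hyp))
    return candidates


def quality_score(source, hypothesis):
    """Stand-in for a reference-free QE model.
    Toy version: fraction of positions where the hypothesis
    agrees with the source word order."""
    src = source.split()
    hyp = hypothesis.split()
    matches = sum(a == b for a, b in zip(src, hyp))
    return matches / max(len(src), 1)


def quality_aware_decode(source, n=8):
    """Sample n candidates, return the one the QE scorer ranks highest."""
    candidates = generate_candidates(source, n)
    return max(candidates, key=lambda h: quality_score(source, h))


best = quality_aware_decode("she said that she would come")
print(best)
```

The design point is that the selection criterion, not the sampler, carries the quality signal: swapping the toy scorer for a discourse-sensitive QE model is what would steer decoding toward coherent, cohesive document-level output.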