Enhancing Non-English Capabilities of English-Centric Large Language Models through Deep Supervision Fine-Tuning

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation of large language models (LLMs) on non-English tasks, caused by English-centric training data and insufficient cross-lingual alignment supervision, this paper proposes deep supervision fine-tuning (DFT). DFT introduces explicit cross-lingual translation supervision at the bottom layers and English reasoning supervision at the middle layers, coupled with a hierarchical, dual-type loss design operating at both the logits and feature levels. This adds explicit guidance to the implicit English-centric "pivot" mechanism that standard fine-tuning leaves unsupervised. Evaluated on the LLaMA-2 and Gemma-2 architectures, DFT achieves significant gains on multilingual benchmarks, substantially improving non-English task performance while preserving English capability. The core contribution is a method that jointly and explicitly guides both the cross-lingual alignment of intermediate representations and the English reasoning pathway in LLMs.

📝 Abstract
Large language models (LLMs) have demonstrated significant progress in multilingual language understanding and generation. However, due to the imbalance in training data, their capabilities in non-English languages are limited. Recent studies have revealed the English-pivot multilingual mechanism of LLMs, where LLMs implicitly convert non-English queries into English at the bottom layers and adopt English for thinking at the middle layers. However, because the intermediate layers of LLMs receive no explicit supervision for cross-lingual alignment, the internal representations at these stages may become inaccurate. In this work, we introduce a deep supervision fine-tuning method (DFT) that incorporates additional supervision in the internal layers of the model to guide its workflow. Specifically, we introduce two training objectives on different layers of LLMs: one at the bottom layers to constrain the conversion of the target language into English, and another at the middle layers to constrain reasoning in English. To achieve this guiding purpose effectively, we designed two types of supervision signals, logits and feature, which represent a stricter constraint and a more relaxed form of guidance, respectively. Our method guides the model not only to consider the final generated result when processing non-English inputs but also to ensure the accuracy of its internal representations. We conducted extensive experiments on typical English-centric large models, LLaMA-2 and Gemma-2, and the results on multiple multilingual datasets show that our method significantly outperforms traditional fine-tuning methods.
Problem

Research questions and friction points this paper is trying to address.

English-centric LLMs have limited capabilities in non-English languages due to imbalanced training data
Intermediate-layer cross-lingual representations can become inaccurate without explicit alignment supervision
Standard fine-tuning supervises only the final output, leaving the internal English-pivot workflow unguided
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep supervision fine-tuning (DFT): intermediate-layer supervision for multilingual LLMs
Two training objectives: translation into English at the bottom layers, English reasoning at the middle layers
Dual supervision signals: logits (stricter constraint) and feature (more relaxed guidance) for accurate internal representations
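As a rough illustration of the dual-type supervision above, the sketch below combines a strict logits-level constraint (KL divergence after projecting an intermediate hidden state through the LM head) with a relaxed feature-level constraint (cosine distance to a reference representation). The function name, loss weights, and the specific choice of KL and cosine losses are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(hidden, ref_hidden, lm_head, ref_logits,
                          w_logits=1.0, w_feat=0.5):
    """Dual-type intermediate-layer loss (illustrative sketch).

    hidden:     intermediate hidden states of the student layer
    ref_hidden: reference hidden states (e.g. from the English input)
    lm_head:    the model's output projection, shared across layers
    ref_logits: reference token distribution for the strict constraint
    Weights w_logits / w_feat are placeholders, not from the paper.
    """
    # Strict logits-level signal: project the intermediate hidden state
    # through the LM head and match the reference distribution via KL.
    logits = lm_head(hidden)
    loss_logits = F.kl_div(F.log_softmax(logits, dim=-1),
                           F.softmax(ref_logits, dim=-1),
                           reduction="batchmean")
    # Relaxed feature-level signal: pull hidden states toward the
    # reference representation via cosine distance.
    loss_feat = 1.0 - F.cosine_similarity(hidden, ref_hidden, dim=-1).mean()
    return w_logits * loss_logits + w_feat * loss_feat

# Toy usage with random tensors (batch=2, seq=4, d_model=8, vocab=16)
torch.manual_seed(0)
lm_head = torch.nn.Linear(8, 16, bias=False)
h = torch.randn(2, 4, 8)            # e.g. bottom-layer hidden states
h_ref = torch.randn(2, 4, 8)        # reference (English) hidden states
ref_logits = torch.randn(2, 4, 16)  # reference token distribution
loss = deep_supervision_loss(h, h_ref, lm_head, ref_logits)
```

In practice this auxiliary loss would be added to the standard next-token cross-entropy of the final layer, with one instance attached to a bottom layer (translation supervision) and one to a middle layer (reasoning supervision).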