CC-Tuning: A Cross-Lingual Connection Mechanism for Improving Joint Multilingual Supervised Fine-Tuning

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address multilingual capability imbalance in large language models (LLMs) caused by English-dominant pretraining, this work proposes a joint fine-tuning paradigm that explicitly models deep cross-lingual interactions in the latent space. Methodologically, it introduces (1) a novel cross-layer cross-lingual connection mechanism, wherein a learnable decision module dynamically fuses bilingual feed-forward activations, and (2) a cross-lingual representation transformation matrix enabling implicit knowledge transfer during monolingual inference. Evaluated systematically across six standard multilingual benchmarks spanning 22 languages, the approach consistently outperforms supervised fine-tuning (SFT) and data-augmentation baselines. Crucially, it provides the first empirical validation that explicit latent-space cross-lingual interaction significantly enhances both multilingual performance and generalization—demonstrating effectiveness across diverse language families and task types.

Technology Category

Application Category

📝 Abstract
Current large language models (LLMs) often exhibit imbalanced multilingual capabilities due to their English-centric training corpora. To address this, existing fine-tuning approaches operating at the data-level (e.g., through data augmentation or distillation) typically introduce implicit cross-lingual alignment, overlooking the potential for more profound, latent-level cross-lingual interactions. In this work, we propose CC-Tuning, a novel multilingual fine-tuning paradigm that explicitly establishes a cross-lingual connection mechanism at the latent level. During training, CC-Tuning fuses the feed forward activations from both English and non-English inputs, enabling the model to benefit from both linguistic resources. This process is facilitated with a trainable Decision Maker that identifies beneficial activations. Furthermore, during inference, a Transform Matrix is utilized to simulate the cross-lingual connection under monolingual setting through representation transformation. Our experiments on six benchmarks covering 22 languages show that CC-Tuning outperforms vanilla SFT and offers a strong latent-level alternative to data-level augmentation methods. Further analysis also highlights the practicality of CC-Tuning and the potential of latent-level cross-lingual interactions in advancing the multilingual performance of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Addresses imbalanced multilingual capabilities in English-centric LLMs
Introduces latent-level cross-lingual interactions for deeper alignment
Enhances multilingual performance via activation fusion and representation transformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent-level cross-lingual connection mechanism
Trainable Decision Maker for activation fusion
Transform Matrix for monolingual representation transformation
🔎 Similar Papers
No similar papers found.