Consensus-Aligned Neuron Efficient Fine-Tuning Large Language Models for Multi-Domain Machine Translation

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of domain shift, parameter interference, and limited generalization that commonly hinder large language models in multi-domain machine translation. The authors propose a neuron-efficient fine-tuning framework that identifies and updates consensus-aligned neurons, selected by maximizing the mutual information between neuron behavior and domain-specific features. By integrating mutual information–guided neuron selection with parameter-efficient fine-tuning (PEFT), the method harmonizes general translation patterns with domain-specific knowledge. Evaluated on ten domains across German–English and Chinese–English translation tasks, the approach significantly outperforms existing PEFT methods, achieving state-of-the-art performance on both seen and unseen domains while mitigating parameter interference and domain overfitting.
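No reference implementation accompanies this summary, so the following is a minimal sketch of the selection step it describes: score each neuron by the mutual information between its activations and the domain label, and keep the top scorers. Here sklearn's `mutual_info_classif` stands in as the MI estimator; the function name `select_consensus_neurons`, the `top_k` cutoff, and the random probe data are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_consensus_neurons(activations: np.ndarray,
                             domain_labels: np.ndarray,
                             top_k: int) -> np.ndarray:
    """Rank neurons by mutual information with domain identity.

    activations:   (num_examples, num_neurons) hidden activations
                   collected from a probe pass over multi-domain data.
    domain_labels: (num_examples,) integer domain ids.
    Returns indices of the top_k neurons whose activations carry
    the most information about the domain.
    """
    # Estimate MI between each neuron's activation and the domain label.
    mi_scores = mutual_info_classif(activations, domain_labels)
    # Keep the highest-scoring neurons as the trainable subset.
    return np.argsort(mi_scores)[-top_k:]

# Illustrative usage with random data standing in for real activations.
acts = np.random.randn(512, 4096)          # 512 probe examples, 4096 neurons
domains = np.random.randint(0, 10, 512)    # 10 domains, as in the paper
selected = select_consensus_neurons(acts, domains, top_k=256)
```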

📝 Abstract
Multi-domain machine translation (MDMT) aims to build a unified model capable of translating content across diverse domains. Despite the impressive machine translation capabilities demonstrated by large language models (LLMs), domain adaptation remains a challenge for LLMs. Existing MDMT methods such as in-context learning and parameter-efficient fine-tuning often suffer from domain shift, parameter interference, and limited generalization. In this work, we propose a neuron-efficient fine-tuning framework for MDMT that identifies and updates consensus-aligned neurons within LLMs. These neurons are selected by maximizing the mutual information between neuron behavior and domain features, enabling LLMs to capture both generalizable translation patterns and domain-specific nuances. Our method then fine-tunes LLMs guided by these neurons, effectively mitigating parameter interference and domain-specific overfitting. Comprehensive experiments on three LLMs across ten German–English and Chinese–English translation domains demonstrate that our method consistently outperforms strong PEFT baselines on both seen and unseen domains, achieving state-of-the-art performance.
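To make the "fine-tunes LLMs guided by these neurons" step concrete, here is a hedged PyTorch sketch of one common way to realize neuron-restricted updates: zeroing gradients for all but the selected neurons via backward hooks, so optimizer steps only touch the chosen rows of a layer's weight matrix. The layer choice, dimensions, and masking granularity are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def restrict_to_selected_neurons(linear: nn.Linear,
                                 selected_idx: torch.Tensor) -> None:
    """Freeze every output neuron of `linear` except the selected ones
    by zeroing their gradients in the backward pass."""
    mask = torch.zeros(linear.out_features, 1)
    mask[selected_idx] = 1.0

    # Backward hooks scale gradients by the mask, so unselected rows
    # receive zero gradient and are never moved by the optimizer.
    linear.weight.register_hook(lambda g: g * mask.to(g.device))
    if linear.bias is not None:
        linear.bias.register_hook(lambda g: g * mask.squeeze(1).to(g.device))

# Illustrative usage: only three neurons of this (assumed) FFN layer train.
layer = nn.Linear(4096, 11008)   # LLaMA-style FFN dimensions, assumed
restrict_to_selected_neurons(layer, torch.tensor([3, 17, 42]))
```

Because unselected parameters keep zero gradients, any standard optimizer leaves them untouched, which mirrors the neuron-efficient spirit of updating only a small, targeted subset of weights.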
Problem

Research questions and friction points this paper is trying to address.

multi-domain machine translation
large language models
domain adaptation
parameter interference
domain shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuron-efficient fine-tuning
consensus-aligned neurons
multi-domain machine translation
mutual information
parameter-efficient fine-tuning
Shuting Jiang
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
Ran Song
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
Yuxin Huang
Unknown affiliation
Yan Xiang
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
Yantuan Xian
Kunming University of Science and Technology
machine learning, natural language processing, text mining
Shengxiang Gao
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming, China
Zhengtao Yu
Kunming University of Science and Technology