Natural Language Fine-Tuning

📅 2024-12-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient domain expertise of large language models (LLMs) in low-resource, specialized domains, this paper proposes Natural Language Fine-Tuning (NLFT), a novel paradigm for fine-grained, token-level adaptation. Unlike conventional supervised fine-tuning (SFT), NLFT requires no human feedback or scalar reward signals; instead, it attaches natural language guidance directly to token-level outputs, replacing the SFT process without a warm-up phase. Its core innovation lies in leveraging the target model's intrinsic language comprehension, augmented by probability-driven salient-token identification. On GSM8K, using only 50 samples, NLFT achieves a 219% accuracy improvement over SFT and, compared to ReFT, reduces training time and GPU memory consumption by 78.27% and 92.24%, respectively. To the authors' knowledge, NLFT is the first method enabling language-directed, token-level controllable, and highly efficient domain adaptation.

📝 Abstract
Large language model fine-tuning techniques typically depend on extensive labeled data and external guidance and feedback, such as human alignment, scalar rewards, and demonstrations. However, in practical applications, the scarcity of domain-specific knowledge poses unprecedented challenges to existing fine-tuning techniques. In this paper, focusing on fine-tuning tasks in specific domains with limited data, we introduce Natural Language Fine-Tuning (NLFT), which, for the first time, uses natural language itself as the fine-tuning signal. By leveraging the strong language comprehension capability of the target LM, NLFT attaches natural language guidance to the token-level outputs; saliency tokens are then identified from their calculated probabilities. Because linguistic information is used effectively, NLFT significantly reduces training costs and markedly improves training efficiency, comprehensively outperforming reinforcement fine-tuning algorithms in accuracy, time, and resource consumption. At the macro level, NLFT can be viewed as a token-level, fine-grained optimization of SFT, efficiently replacing the SFT process without any warm-up (whereas ReFT requires multiple rounds of SFT warm-up). Compared to SFT, NLFT does not increase algorithmic complexity, remaining O(n). Extensive experiments on the GSM8K dataset demonstrate that NLFT, with only 50 data instances, achieves an accuracy gain exceeding that of SFT by 219%. Compared to ReFT, the time and space costs of NLFT are reduced by 78.27% and 92.24%, respectively. NLFT paves the way for deploying innovative LLM fine-tuning applications where resources are limited, such as at network edges. Our code has been released at https://github.com/Julia-LiuJ/NLFT.
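The abstract describes identifying saliency tokens from their calculated probabilities and applying token-level guidance while keeping O(n) complexity. The following is a minimal sketch of that idea, not the paper's actual algorithm: it assumes a simple probability-threshold rule for saliency and a per-token loss reweighting, with all function names (`salient_token_weights`, `weighted_nll`) and parameters (`threshold`, `boost`) being hypothetical illustrations.

```python
import math

def salient_token_weights(token_probs, threshold=0.5, boost=2.0):
    """Mark tokens the model assigns low probability as salient.

    Assumed rule for illustration: a token with probability below
    `threshold` is salient and gets its loss weight boosted; all
    others keep weight 1.0. Runs in O(n) over the token sequence.
    """
    return [boost if p < threshold else 1.0 for p in token_probs]

def weighted_nll(token_probs, weights):
    """Per-token weighted negative log-likelihood (token-level loss)."""
    return sum(-w * math.log(p) for p, w in zip(token_probs, weights))

# Toy sequence of model probabilities for four target tokens.
probs = [0.9, 0.2, 0.8, 0.1]
weights = salient_token_weights(probs)   # → [1.0, 2.0, 1.0, 2.0]
loss = weighted_nll(probs, weights)
```

In this sketch the low-probability tokens dominate the loss, which mirrors the abstract's claim that token-level, fine-grained weighting refines SFT without changing its O(n) cost; the paper's real saliency criterion additionally incorporates natural language guidance rather than a fixed threshold.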
Problem

Research questions and friction points this paper is trying to address.

Domain-specific Language Modeling
Limited Data
Expertise Calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Natural Language Fine-Tuning
Efficiency
Resource-constrained Environment
Jia Liu
Pazhou Laboratory, Guangzhou, China; Huazhong University of Science and Technology, Wuhan, China
Yue Wang
Pazhou Laboratory, Guangzhou, China; South China University of Technology, Guangzhou, China
Zhiqi Lin
Bytedance Inc.
Distributed AI Systems for Large Models
Min Chen
Pazhou Laboratory, Guangzhou, China; South China University of Technology, Guangzhou, China
Yixue Hao
Highly Cited Researcher, Associate Professor, Huazhong University of Science and Technology
Cognitive Computing, Edge Computing, Healthcare Big Data
Long Hu
Associate Professor of Computer Science, Huazhong University of Science and Technology
Edge Computing, Big Data, Affective Computing, Deep Reinforcement Learning