🤖 AI Summary
Supervised fine-tuning (SFT) suffers from computational redundancy when domain-specific data substantially overlaps with the large language model’s (LLM’s) pre-existing knowledge. Method: This paper proposes a human-inspired self-learning framework that enables the LLM to perform a zero-shot “self-answer–self-evaluate–self-filter” loop, automatically identifying and removing question-answer pairs already mastered from the SFT dataset; only the remaining unknown-knowledge subset undergoes lightweight SFT. Contribution/Results: To our knowledge, this is the first work to integrate human-like metacognitive mechanisms into efficient LLM adaptation—requiring neither external annotations nor auxiliary discriminative models. Experiments in agriculture and healthcare domains demonstrate 47%–63% reduction in training time while matching the performance of full-dataset SFT, significantly improving both computational efficiency and precision of knowledge utilization.
📝 Abstract
When using supervised fine-tuning (SFT) to adapt large language models (LLMs) to specific domains, a significant challenge arises: should we use the entire SFT dataset for fine-tuning? Common practice is to fine-tune directly on the full dataset, since information about an LLM's past training data is limited. However, if the SFT dataset largely overlaps with the model's existing knowledge, the performance gains are minimal and computational resources are wasted. Identifying the unknown knowledge within the SFT dataset and fine-tuning on it alone could substantially improve training efficiency. To address this challenge, we propose a self-learning framework for LLMs inspired by human learning patterns. The framework takes an SFT dataset in a specific domain as input. First, the LLM answers the questions in the SFT dataset. It then objectively grades its own responses and retains only the incorrectly answered QA pairs, i.e., the knowledge it has not yet mastered. Finally, we fine-tune the LLM on this filtered QA subset. Experimental results in the fields of agriculture and medicine demonstrate that our method substantially reduces training time while achieving improvements comparable to those attained with full-dataset fine-tuning. By concentrating on the unknown knowledge within the SFT dataset, our approach improves the efficiency of fine-tuning LLMs.
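The self-answer, self-evaluate, self-filter loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `answer_fn` and `grade_fn` are hypothetical placeholders for the model's zero-shot generation call and its self-grading call, whose exact prompts and grading criteria are not specified here.

```python
def self_filter(sft_dataset, answer_fn, grade_fn):
    """Return the subset of QA pairs the model answers incorrectly.

    sft_dataset: list of (question, reference_answer) pairs
    answer_fn:   question -> model's zero-shot answer (hypothetical LLM call)
    grade_fn:    (question, reference_answer, model_answer) -> bool,
                 True if the model's answer is judged correct (hypothetical)
    """
    unknown = []
    for question, reference in sft_dataset:
        prediction = answer_fn(question)                   # self-answer
        if not grade_fn(question, reference, prediction):  # self-evaluate
            unknown.append((question, reference))          # self-filter: keep unmastered pairs
    return unknown  # fine-tune only on this subset


# Toy demonstration with stub functions standing in for real LLM calls.
dataset = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
stub_answers = {"What is 2+2?": "4", "Capital of France?": "Lyon"}

kept = self_filter(
    dataset,
    answer_fn=lambda q: stub_answers[q],
    grade_fn=lambda q, ref, pred: pred.strip() == ref.strip(),
)
print(kept)  # only the incorrectly answered pair remains
```

In the paper's setting, both roles would be played by the same LLM; here an exact-match grader stands in for the model's objective self-grading step.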