KaFT: Knowledge-aware Fine-tuning for Boosting LLMs' Domain-specific Question-Answering Performance

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from degraded supervised fine-tuning (SFT) performance and increased hallucination in domain-specific question answering (QA), primarily due to conflicts between their internal prior knowledge and context-dependent knowledge embedded in training data. Method: We propose a knowledge-aware weighted fine-tuning framework. It introduces a novel dynamic reward mechanism that detects and quantifies query-diverse knowledge conflicts, transforming conflicting samples into discriminative training signals for conflict-driven sample weighting during SFT. To ensure optimization stability, we further incorporate multi-model consensus verification. Results: Extensive experiments across four mainstream LLMs demonstrate that our method significantly improves domain QA accuracy, consistently suppresses hallucination, and enhances model robustness and cross-scenario generalization capability.

📝 Abstract
Supervised fine-tuning (SFT) is a common approach to improve the domain-specific question-answering (QA) performance of large language models (LLMs). However, recent literature reveals that due to the conflicts between LLMs' internal knowledge and the context knowledge of training data, vanilla SFT using the full QA training set is usually suboptimal. In this paper, we first design a query diversification strategy for robust conflict detection and then conduct a series of experiments to analyze the impact of knowledge conflict. We find that 1) training samples with varied conflicts contribute differently, where SFT on the data with large conflicts leads to catastrophic performance drops; 2) compared to directly filtering out the conflict data, appropriately applying the conflict data would be more beneficial. Motivated by this, we propose a simple-yet-effective Knowledge-aware Fine-tuning (namely KaFT) approach to effectively boost LLMs' performance. The core of KaFT is to adapt the training weight by assigning different rewards for different training samples according to conflict level. Extensive experiments show that KaFT brings consistent and significant improvements across four LLMs. More analyses prove that KaFT effectively improves the model generalization and alleviates the hallucination.
Problem

Research questions and friction points this paper is trying to address.

Detect and mitigate knowledge conflicts in LLM fine-tuning
Optimize training weights based on conflict levels
Enhance domain-specific QA performance and reduce hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Query diversification for robust conflict detection
Adaptive training weight by conflict level
Improves generalization and reduces hallucination
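The core idea above, down-weighting rather than discarding high-conflict training samples, can be sketched as a simple per-sample loss weighting. This is a minimal illustration, not the paper's actual reward function: the thresholds, weight values, and function names (`conflict_weight`, `weighted_sft_loss`) are all hypothetical.

```python
def conflict_weight(conflict_score, low=0.3, high=0.7):
    """Map a per-sample conflict score in [0, 1] to a training weight.
    Thresholds and weights are illustrative, not the paper's values."""
    if conflict_score <= low:
        return 1.0   # agrees with the model's prior knowledge: full weight
    if conflict_score >= high:
        return 0.1   # large conflict: down-weight instead of filtering out
    return 0.5       # moderate conflict: partial weight

def weighted_sft_loss(per_sample_losses, conflict_scores):
    """Conflict-weighted average of per-sample SFT losses."""
    weights = [conflict_weight(c) for c in conflict_scores]
    total = sum(w * l for w, l in zip(weights, per_sample_losses))
    return total / sum(weights)
```

In a real SFT loop the per-sample losses would come from the model (e.g. token-averaged cross-entropy with `reduction="none"`), and conflict scores from the paper's query-diversification detector; the weighting step itself stays this simple.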
Qihuang Zhong
Wuhan University
Large Language Models · Natural Language Processing
Liang Ding
The University of Sydney, Australia
Xiantao Cai
School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China
Juhua Liu
School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China
Bo Du
Department of Management, Griffith Business School
Sustainable Transport · Travel Behaviour · Urban Data Analytics · Logistics and Supply Chain
Dacheng Tao
Nanyang Technological University
artificial intelligence · machine learning · computer vision · image processing · data mining