🤖 AI Summary
This work addresses hallucination in large language models during fine-tuning, which arises from inconsistencies between pretraining and fine-tuning knowledge and impairs the model's ability to recognize when it should abstain from answering. To mitigate this, the authors propose a knowledge-aware fine-tuning approach that estimates a fine-grained, instance-level knowledge score via multi-sample inference. This score is then used to weight the fine-tuning signal, explicitly guiding the model to respond "I don't know" to unfamiliar inputs. By combining uncertainty modeling with a refusal mechanism, the method preserves accuracy on known questions while substantially improving the detection and rejection of unknown ones. The study also introduces evaluation metrics for model uncertainty, empirically validating the effectiveness of the proposed strategy.
📝 Abstract
While large language models (LLMs) demonstrate strong capabilities across diverse user queries, they still suffer from hallucinations, often arising from knowledge misalignment between pre-training and fine-tuning. To address this misalignment, we reliably estimate a fine-grained, instance-level knowledge score via multi-sampled inference. Using the knowledge score, we scale the learning signal according to the model's existing knowledge, while encouraging explicit "I don't know" responses for out-of-scope queries. Experimental results show that this approach allows the model to explicitly express uncertainty when it lacks knowledge, while maintaining accuracy on questions it can answer. Furthermore, we propose evaluation metrics for uncertainty, showing that accurate discrimination between known and unknown instances consistently improves performance.
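The core recipe described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`knowledge_score`, `build_target`, `loss_weight`), the agreement-with-gold scoring rule, the threshold value, and the stubbed sampler are all assumptions standing in for the paper's actual multi-sampled inference and weighting scheme.

```python
import random

def knowledge_score(sample_fn, question, gold, k=8):
    """Instance-level knowledge score: the fraction of k sampled
    answers that match the gold answer (multi-sample inference)."""
    samples = [sample_fn(question) for _ in range(k)]
    return sum(ans == gold for ans in samples) / k

def build_target(question, gold, score, threshold=0.25):
    """Keep the gold answer for known instances; for low-score
    (out-of-scope) instances, train toward an explicit refusal."""
    return gold if score >= threshold else "I don't know"

def loss_weight(score):
    """Scale the fine-tuning signal by the model's existing knowledge.
    Identity weighting is an illustrative choice; the paper's exact
    scaling may differ."""
    return score

# Toy demo with a stubbed sampler standing in for LLM decoding.
random.seed(0)
def stub_sampler(question):
    # Pretend the model answers "Paris" about 70% of the time.
    return "Paris" if random.random() < 0.7 else "Lyon"

s = knowledge_score(stub_sampler, "Capital of France?", "Paris", k=20)
target = build_target("Capital of France?", "Paris", s)
```

In a real fine-tuning loop, each training example's cross-entropy loss would be multiplied by `loss_weight(score)`, so the model is pushed hardest on instances it already partially knows and steered toward refusal elsewhere.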