🤖 AI Summary
This work addresses the issue of factual hallucinations in large language models, which often arise from skewed distributions in pretraining corpora that assign high probability to false information. To mitigate this, the authors propose PretrainRL, a novel framework that integrates reinforcement learning into the pretraining phase. PretrainRL employs a "debias-then-learn" mechanism to actively suppress the probability the model assigns to false statements and incorporates an efficient negative sampling strategy to reshape the model's knowledge probability distribution. Additionally, the study introduces novel evaluation metrics to assess a model's grasp of factual knowledge. Experimental results on three public benchmarks demonstrate that PretrainRL significantly alleviates factual hallucinations, outperforming current state-of-the-art methods.
📝 Abstract
Large language models (LLMs), despite their powerful capabilities, suffer from factual hallucinations where they generate verifiable falsehoods. We identify a root cause of this issue: the imbalanced data distribution in the pretraining corpus, which leads to a state of "low-probability truth" and "high-probability falsehood". Recent approaches, such as teaching models to say "I don't know" or post-hoc knowledge editing, either evade the problem or suffer from catastrophic forgetting. To address this issue at its root, we propose **PretrainRL**, a novel framework that integrates reinforcement learning into the pretraining phase to consolidate factual knowledge. The core principle of PretrainRL is "**debiasing then learning**": it actively reshapes the model's probability distribution by down-weighting high-probability falsehoods, thereby making "room" for low-probability truths to be learned effectively. To enable this, we design an efficient negative sampling strategy to discover these high-probability falsehoods and introduce novel metrics to evaluate the model's probabilistic state concerning factual knowledge. Extensive experiments on three public benchmarks demonstrate that PretrainRL significantly alleviates factual hallucinations and outperforms state-of-the-art methods.
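The paper's exact objective is not reproduced here, but the "debiasing then learning" idea can be illustrated with a toy sketch: combine the usual likelihood term on the factual completion with a penalty on the probability of a known high-probability falsehood. Everything below (the combined loss, the `lam` weight, the three-way toy vocabulary) is an illustrative assumption, not the authors' implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def debias_then_learn_step(logits, true_idx, false_idx, lam=0.5, lr=1.0):
    """One gradient step on L = -log p(true) + lam * log p(false).

    The first term is the standard LM objective on the factual completion;
    the second actively down-weights the high-probability falsehood,
    'making room' for the low-probability truth. (Hypothetical objective,
    used only to illustrate the debias-then-learn principle.)
    """
    p = softmax(logits)
    # Analytic gradient of L w.r.t. each logit z_k:
    #   (1 - lam) * p_k  - 1[k == true_idx] + lam * 1[k == false_idx]
    grad = [(1.0 - lam) * pk for pk in p]
    grad[true_idx] -= 1.0
    grad[false_idx] += lam
    return [z - lr * g for z, g in zip(logits, grad)]

# Toy vocabulary of completions for "The capital of Australia is ___":
# index 0 = "Sydney" (high-probability falsehood), index 1 = "Canberra" (truth).
logits = [2.0, -1.0, 0.0]
initial = softmax(logits)
for _ in range(20):
    logits = debias_then_learn_step(logits, true_idx=1, false_idx=0)
probs = softmax(logits)
print(initial, probs)  # mass shifts from the falsehood (0) to the truth (1)
```

With plain maximum-likelihood training the falsehood's head start would persist far longer; the explicit down-weighting term is what frees probability mass for the truth, which is the intuition the abstract describes.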