AI Summary
To address hallucination in large language models (LLMs), this paper proposes a knowledge refinement paradigm that emulates human cognitive reflection. It dynamically constructs positive and negative contrastive samples based on the model's internal knowledge state, thereby reinforcing correct knowledge, deepening ambiguous knowledge, forgetting erroneous knowledge, and enabling honest refusal to answer unknown queries. We introduce the first knowledge-state-aware adaptive contrastive learning mechanism, integrating implicit knowledge representation alignment with activation-based analysis of LLM internals to guide sample construction. Evaluated on TruthfulQA, FactScore, and Self-Knowledge benchmarks, our method significantly improves factual consistency and knowledge honesty, reducing the average hallucination rate by 32.7%, without relying on external knowledge sources or human annotations.
Abstract
Alleviating the hallucinations of Large Language Models (LLMs) has long been a fundamental goal of the LLM research community. Across the many hallucination-related studies, a mainstream category of methods reduces hallucinations by optimizing the knowledge representations of LLMs to change their outputs. Considering that the core focus of these works is the knowledge acquired by models, and that knowledge has long been a central theme in human societal progress, we believe that the process of models refining knowledge can greatly benefit from the way humans learn. In our work, by imitating the human learning process, we design an Adaptive Contrastive Learning strategy. Our method flexibly constructs different positive and negative samples for contrastive learning based on LLMs' actual mastery of knowledge. This strategy helps LLMs consolidate the correct knowledge they already possess, deepen their understanding of the correct knowledge they have encountered but not fully grasped, forget the incorrect knowledge they previously learned, and honestly acknowledge the knowledge they lack. Extensive experiments and detailed analyses on widely used datasets demonstrate the effectiveness of our method.
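The abstract describes four treatments keyed to the model's mastery of each fact: consolidate, deepen, forget, and refuse. A minimal sketch of this idea appears below; the state names, thresholds, pairing rules, and refusal string are illustrative assumptions, not the paper's specification, and the loss is a standard InfoNCE objective over embedding vectors.

```python
import numpy as np

def knowledge_state(p_correct, confidence, hi=0.8, lo=0.2):
    """Classify mastery of one fact from the model's empirical accuracy
    (p_correct, e.g. estimated over k sampled answers) and its answer
    confidence. Thresholds hi/lo are assumed values for illustration."""
    if p_correct >= hi:
        return "known"       # consolidate: reinforce the model's own answer
    if p_correct >= lo:
        return "ambiguous"   # deepen: pull toward the gold answer
    if confidence >= hi:
        return "incorrect"   # forget: push away the confident wrong answer
    return "unknown"         # refuse: pull toward an honest refusal

def build_pair(state, model_answer, gold_answer, refusal="I don't know."):
    """Choose (positive, negative) targets for contrastive learning,
    following the four treatments named in the abstract."""
    rules = {
        "known":     (model_answer, None),         # no negative needed
        "ambiguous": (gold_answer, model_answer),
        "incorrect": (gold_answer, model_answer),
        "unknown":   (refusal, model_answer),
    }
    return rules[state]

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss over embedding vectors (cosine similarity,
    temperature tau). The positive occupies index 0 of the logits."""
    def sim(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([sim(anchor, positive)]
                      + [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

In training, each query's answer embedding would serve as the anchor, with the positive and negatives chosen by `build_pair`; the abstract does not detail how embeddings or confidence scores are obtained, so those remain open here.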