Know More, Know Clearer: A Meta-Cognitive Framework for Knowledge Augmentation in Large Language Models

📅 2026-02-13

📝 Abstract
Knowledge augmentation has significantly enhanced the performance of Large Language Models (LLMs) on knowledge-intensive tasks. However, existing methods typically operate on the simplistic premise that model performance equates to internal knowledge, overlooking the knowledge-confidence gaps that lead to overconfident errors or uncertain truths. To bridge this gap, we propose a novel meta-cognitive framework for reliable knowledge augmentation via differentiated intervention and alignment. Our approach leverages internal cognitive signals to partition the knowledge space into mastered, confused, and missing regions, guiding targeted knowledge expansion. Furthermore, we introduce a cognitive consistency mechanism that synchronizes subjective certainty with objective accuracy, ensuring calibrated knowledge boundaries. Extensive experiments demonstrate that our framework consistently outperforms strong baselines, validating its effectiveness not only in enhancing knowledge capabilities but also in fostering cognitive behaviors that better distinguish knowns from unknowns.
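The three-way partition described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the specific cognitive signals (here, a scalar confidence plus answer correctness), the threshold value, and the partition rule are all assumptions chosen for illustration, mapping high-confidence correct answers to "mastered", confidence-accuracy mismatches (overconfident errors, uncertain truths) to "confused", and low-confidence wrong answers to "missing".

```python
# Illustrative sketch only -- not the paper's method. Partitions items
# into mastered / confused / missing regions from two assumed cognitive
# signals per item: self-reported confidence and answer correctness.

def partition(items, threshold=0.7):
    """items: list of (confidence in [0, 1], correct: bool) pairs.

    Returns a dict mapping region name -> list of item indices.
    """
    regions = {"mastered": [], "confused": [], "missing": []}
    for i, (conf, correct) in enumerate(items):
        confident = conf >= threshold
        if correct and confident:
            regions["mastered"].append(i)   # knows, and knows that it knows
        elif correct != confident:
            regions["confused"].append(i)   # overconfident error or uncertain truth
        else:
            regions["missing"].append(i)    # wrong and unsure: knowledge absent
    return regions


r = partition([(0.9, True), (0.9, False), (0.3, True), (0.2, False)])
# one item per region type: mastered, overconfident error,
# uncertain truth (both land in "confused"), and missing
```

Under such a split, "confused" items would call for calibration-style alignment (synchronizing certainty with accuracy), while "missing" items would call for targeted knowledge expansion, matching the differentiated intervention the abstract describes.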
Problem

Research questions and friction points this paper is trying to address.

knowledge augmentation
large language models
knowledge-confidence gap
overconfident errors
cognitive calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

meta-cognitive framework
knowledge augmentation
cognitive consistency
confidence calibration
knowledge space partitioning