🤖 AI Summary
This work addresses the limited metacognitive awareness of large language models, which often struggle to assess accurately whether they possess the knowledge required to answer a given question. To bridge this gap, the authors propose Evolution Strategy-based Metacognitive Alignment (ESMA), a novel approach that quantifies metacognitive capability and integrates it into the fine-tuning process. ESMA employs a dual-prompt mechanism to evaluate and enhance the alignment between a model's internal self-assessment and its external behavioral outputs. Experimental results demonstrate that by adjusting only a small set of critical parameters, ESMA significantly improves both metacognitive accuracy and generalization across seen and unseen tasks, marking a meaningful step toward more self-aware and reliable language models.
📝 Abstract
Metacognition, the awareness of one's own knowledge, is a critical component of intelligence. While humans rely on a shared internal memory both for answering questions and for reporting their knowledge state, this dependency remains underexplored in LLMs. This study proposes a framework to measure metacognitive ability $d'_{\mathrm{type2}}$ using a dual-prompt method, followed by the introduction of Evolution Strategy for Metacognitive Alignment (ESMA) to bind a model's internal knowledge to its explicit behaviors. ESMA demonstrates robust generalization across diverse untrained settings, indicating an enhancement in the model's ability to reference its own knowledge. Furthermore, parameter analysis attributes these improvements to a sparse set of significant modifications.
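To make the metric concrete: in signal detection theory, type-2 sensitivity contrasts how often a subject reports confidence when correct versus when incorrect. A minimal sketch of computing $d'_{\mathrm{type2}}$ under the dual-prompt setup (the abstract does not give the exact estimator; this assumes the standard $z(\mathrm{HR}) - z(\mathrm{FAR})$ form, where a "hit" is the model claiming to know an answer it in fact gets right, and a "false alarm" is claiming to know an answer it gets wrong):

```python
from statistics import NormalDist


def d_prime_type2(says_know_correct, says_know_wrong, n_correct, n_wrong):
    """Estimate type-2 sensitivity d'_type2 = z(HR) - z(FAR).

    says_know_correct: count of questions the model answered correctly
                       AND claimed to know (type-2 hits)
    says_know_wrong:   count answered incorrectly but claimed to know
                       (type-2 false alarms)
    n_correct, n_wrong: totals of correctly / incorrectly answered questions
    """
    z = NormalDist().inv_cdf
    # Clamp rates away from 0 and 1 (a common log-linear-style correction)
    # so the z-scores stay finite when a rate is exactly 0 or 1.
    hr = min(max(says_know_correct / n_correct, 0.5 / n_correct),
             1 - 0.5 / n_correct)
    far = min(max(says_know_wrong / n_wrong, 0.5 / n_wrong),
              1 - 0.5 / n_wrong)
    return z(hr) - z(far)


# Example: the model claims knowledge on 90/100 correct answers but only
# 10/100 incorrect ones -> well-aligned self-assessment, d' ≈ 2.56.
print(round(d_prime_type2(90, 10, 100, 100), 3))
```

A score near zero means the model's "I know this" reports carry no information about whether it actually answers correctly; higher values indicate tighter alignment between internal knowledge and explicit self-report.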