🤖 AI Summary
This work addresses two limitations of current large language models (LLMs): shallow comprehension of domain-specific knowledge and the lack of targeted active learning to deepen it. The authors propose KA2L, a knowledge-aware active learning framework that integrates knowledge distribution probing with hidden-state decoding. By analyzing how knowledge is represented in the hidden states across Transformer layers, KA2L identifies regions of unknown knowledge and generates targeted queries that guide model training more effectively. Experiments show that KA2L achieves significant performance gains on three benchmark datasets while halving annotation and computation costs relative to conventional approaches, substantially improving the efficiency of domain-specific knowledge acquisition in LLMs.
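To make the probing idea concrete, here is a minimal sketch of knowledge-distribution probing: a linear probe trained on a chosen Transformer layer's hidden states to separate knowledge the model has mastered from knowledge it has not. The model name, probed layer index, pooling choice (last token), and the known/unknown labeling scheme are all hypothetical illustration choices, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

# Placeholder model for a self-contained example; in practice this would be
# one of the nine open-source LLMs evaluated in the paper.
MODEL = "gpt2"
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
lm.eval()

LAYER = 6  # hypothetical mid-depth layer; the paper probes "specific Transformer layers"

@torch.no_grad()
def hidden_state(text: str) -> torch.Tensor:
    """Return the layer-LAYER hidden state of the final token for one input."""
    ids = tok(text, return_tensors="pt")
    out = lm(**ids)
    # hidden_states is a tuple: (embedding layer, layer 1, ..., layer N)
    return out.hidden_states[LAYER][0, -1]

def fit_probe(questions, known_labels):
    """Fit a linear probe on (question, label) pairs.

    known_labels: 1 if the model previously answered the question correctly
    ("known"), 0 otherwise ("unknown").
    """
    feats = torch.stack([hidden_state(q) for q in questions]).float().numpy()
    return LogisticRegression(max_iter=1000).fit(feats, known_labels)

def predict_unknown(probe, questions):
    """Probability that each knowledge point is still unknown to the model."""
    feats = torch.stack([hidden_state(q) for q in questions]).float().numpy()
    return probe.predict_proba(feats)[:, 0]  # column 0 = P(label 0 = unknown)
```

Questions the probe scores as likely unknown would then be prioritized for annotation and fine-tuning, which is where the claimed cost savings come from.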
📝 Abstract
Fine-tuning large language models (LLMs) on high-quality knowledge has been shown to enhance their performance effectively. However, little research examines how deeply LLMs comprehend domain-specific knowledge or how targeted active learning can improve their expertise. To address this gap, we introduce the Knowledge-Aware Active Learning (KA2L) framework. Through latent-space analysis, KA2L assesses an LLM's mastery of specific knowledge points and uses this assessment to construct questions the model cannot yet answer. This active learning strategy improves training efficiency by focusing on knowledge the model has yet to master, minimizing redundant learning of already acquired information. Specifically, we employ a knowledge distribution probing technique that examines the hidden states of specific Transformer layers to identify the distribution of known and unknown knowledge within the LLM. We further propose a hidden-state decoding method that generates, from the latent knowledge space, natural-language questions about knowledge the model does not yet possess. In experiments on nine open-source LLMs, KA2L reduces annotation and computation costs by 50% across two open-domain datasets and one vertical-domain dataset while also achieving better performance, offering valuable insights into active learning strategies for LLMs. The code is available at https://anonymous.4open.science/r/KA2L-F15C.
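The hidden-state decoding step can be illustrated with a short sketch that greedily decodes a natural-language question from a latent vector. The projection from the latent knowledge space into prompt embeddings (`to_prompt`), the soft-prompt length of 4, and the greedy decoding loop are all hypothetical stand-ins; in the actual framework such a projection would be learned, whereas here an untrained linear map serves only as a placeholder.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def decode_question(lm, tok, latent: torch.Tensor, max_new_tokens: int = 30) -> str:
    """Greedily decode a natural-language question from a latent vector."""
    hidden = lm.config.hidden_size
    # Hypothetical learned map from the latent knowledge space to a soft
    # prompt of 4 embeddings; untrained here, so outputs are placeholders.
    to_prompt = nn.Linear(hidden, 4 * hidden)
    embeds = to_prompt(latent).view(1, 4, hidden)
    emb_table = lm.get_input_embeddings()
    generated = []
    for _ in range(max_new_tokens):
        out = lm(inputs_embeds=embeds)          # forward pass on soft prompt + tokens so far
        next_id = out.logits[0, -1].argmax()    # greedy choice of next token
        if next_id.item() == tok.eos_token_id:
            break
        generated.append(next_id.item())
        # Append the chosen token's embedding and continue decoding.
        embeds = torch.cat([embeds, emb_table(next_id)[None, None]], dim=1)
    return tok.decode(generated)
```

Under these assumptions, feeding in a latent vector drawn from a region the probe flags as unknown (for example, `decode_question(lm, tok, hidden_state("..."))`) would yield a candidate query for annotation; with a trained projection, the decoded text would target the model's knowledge gaps.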