🤖 AI Summary
Large language models (LLMs) possess vast knowledge but suffer from ill-defined knowledge boundaries and hallucination-prone knowledge retrieval. Existing work lacks a formal, systematic characterization of these boundaries. This paper introduces the first formal definition of LLM knowledge boundaries and proposes the first taxonomy of LLM knowledge, categorizing it into factual, procedural, context-dependent, and time-sensitive types. The authors further develop an integrated "motivation–identification–mitigation" analytical framework. Through taxonomy-based modeling, a systematic literature review, and cross-method synthesis, they identify six open challenges. The work unifies fragmented research paradigms and establishes a theoretical foundation and practical guidance for designing knowledge-aware LLMs.
📝 Abstract
Although large language models (LLMs) store vast amounts of knowledge in their parameters, they still have limitations in memorizing and utilizing certain knowledge, leading to undesired behaviors such as generating untruthful and inaccurate responses. This highlights the critical need to understand the knowledge boundary of LLMs, a concept that remains inadequately defined in existing research. In this survey, we propose a comprehensive definition of the LLM knowledge boundary and introduce a formalized taxonomy categorizing knowledge into four distinct types. Building on this foundation, we systematically review the field through three key lenses: the motivation for studying LLM knowledge boundaries, methods for identifying these boundaries, and strategies for mitigating the challenges they present. Finally, we discuss open challenges and potential research directions in this area. We aim for this survey to offer the community a comprehensive overview, facilitate access to key issues, and inspire further advancements in LLM knowledge research.