Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge Expansion

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses three core challenges in large language models (LLMs): difficulty in dynamically expanding knowledge, weak integration of heterogeneous multi-source knowledge, and insufficient long-term consistency guarantees. To tackle these, the survey proposes a unified analytical framework that systematically integrates four complementary paradigms: continual learning, parametric model editing, retrieval-augmented generation (RAG), and implicit preference modeling. It introduces a structured taxonomy of knowledge types—factual, domain-specific, linguistic, and preference-based—and characterizes their evolution along three dimensions (consistency, scalability, and verifiability), thereby constructing a comprehensive methodology map for knowledge expansion. Key contributions include: (1) establishing a cross-paradigm evaluation benchmark; (2) advocating modular, composable adaptation design; and (3) advancing standardization in evaluation protocols. The framework provides both theoretical foundations and practical guidelines for developing evolvable, trustworthy, and scenario-adaptive knowledge-enhanced LLMs.

📝 Abstract
Adapting large language models (LLMs) to new and diverse knowledge is essential for their lasting effectiveness in real-world applications. This survey provides an overview of state-of-the-art methods for expanding the knowledge of LLMs, focusing on integrating various knowledge types, including factual information, domain expertise, language proficiency, and user preferences. We explore techniques such as continual learning, model editing, and retrieval-based explicit adaptation, while discussing challenges such as knowledge consistency and scalability. Designed as a guide for researchers and practitioners, this survey sheds light on opportunities for advancing LLMs as adaptable and robust knowledge systems.
Problem

Research questions and friction points this paper is trying to address.

Expand LLMs with diverse knowledge types
Address knowledge consistency and scalability challenges
Advance LLMs as adaptable knowledge systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual learning for LLM adaptation.
Model editing to update knowledge.
Retrieval-based explicit adaptation methods.
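The retrieval-based paradigm above can be illustrated with a minimal sketch: new knowledge lives in an external store and is injected into the prompt at inference time, leaving model parameters untouched. The keyword-overlap retriever and prompt template below are simplifying assumptions for illustration, not a method prescribed by the survey.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy scorer)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved evidence so the LLM can ground its answer."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical external knowledge store.
docs = [
    "The deployment was updated in 2024 with new domain knowledge.",
    "Retrieval keeps parametric weights frozen.",
    "User preferences can steer generation style.",
]
prompt = build_prompt("How does retrieval update model knowledge?", docs)
```

In a real system, the toy scorer would be replaced by dense or hybrid retrieval, and `prompt` would be passed to the LLM; the key property (knowledge updates without retraining) is what distinguishes this paradigm from continual learning and model editing.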