🤖 AI Summary
To address the poor adaptability of KBQA systems to small-scale domain knowledge bases, the weak performance of unsupervised training, and the high cost of supervised fine-tuning, this paper proposes a self-supervised alignment framework that requires neither human annotations nor supervision from external large language models. The core innovation is a multi-granularity self-labeling and iterative self-verification mechanism that jointly optimizes retrieval and generation using the model's own intrinsic capabilities. The framework integrates self-supervised learning, knowledge distillation, and lightweight parameter updates. Evaluated across multiple domain-specific KBQA tasks, the method achieves 90% of the performance gain attained by GPT-4-supervised fine-tuning, while incurring zero annotation cost and no LLM API calls, and significantly outperforms existing unsupervised baselines. For full reproducibility, the authors release all code, datasets, and end-to-end analysis artifacts.
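To make the "multi-granularity self-labeling" step concrete, here is a minimal sketch of how a model might annotate its own training data over a textual KB. Everything in it is an assumption for illustration: the helper names (`chunks`, `self_label`), the prompts, and the character-window granularities are hypothetical, not the paper's implementation.

```python
from typing import Callable, Iterator

def chunks(text: str, size: int) -> Iterator[str]:
    """Split the KB into fixed-size character windows (a simplification)."""
    for i in range(0, len(text), size):
        yield text[i:i + size]

def self_label(kb_text: str, generate: Callable[[str], str],
               granularities=(512, 2048, 8192)) -> list[dict]:
    """Have the model write its own QA pairs over windows of several sizes:
    short windows yield local factoid questions, while long windows yield
    questions that require aggregating knowledge across the KB."""
    data = []
    for size in granularities:
        for chunk in chunks(kb_text, size):
            q = generate(f"Write one question answerable from this text:\n{chunk}")
            a = generate(f"Using only this text, answer the question.\n"
                         f"Text: {chunk}\nQ: {q}")
            data.append({"question": q, "answer": a, "context": chunk})
    return data
```

The multi-granularity loop is what lets the self-labeled data cover both local facts and KB-wide knowledge without any external annotator.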
📝 Abstract
Although retrieval-augmented generation (RAG) remains essential for knowledge-based question answering (KBQA), current paradigms face critical challenges in specialized domains. Existing methods struggle with targeted adaptation to small-scale KBs: vanilla unsupervised training is largely ineffective, while supervised fine-tuning incurs prohibitive costs for external supervision signals. We present KBAlign, a self-supervised framework that enhances RAG systems through efficient model adaptation. Our key insight is to leverage the model's intrinsic capabilities for knowledge alignment through two innovative mechanisms: multi-grained self-annotation, which captures global knowledge for data construction, and iterative tuning, which accelerates convergence through self-verification. This framework enables cost-effective model adaptation to specific textual KBs without human supervision or external model assistance. Experiments demonstrate that KBAlign achieves 90% of the performance gain obtained through GPT-4-supervised adaptation while relying entirely on self-annotation by much smaller models. KBAlign significantly improves downstream QA accuracy across multiple domains at minimal cost, particularly benefiting scenarios that require deep knowledge integration from specialized corpora. We release our experimental data, models, and process analyses to the community for further exploration (https://github.com/thunlp/KBAlign).
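The abstract's second mechanism, iterative tuning with self-verification, can be sketched as an outer loop around the `self_label` helper above. Again this is a hedged sketch under stated assumptions: `model` (with a `generate` method), the caller-supplied `fine_tune` updater, and the exact-match verification rule are placeholders, not KBAlign's actual interfaces.

```python
def align(model, kb_text: str, fine_tune, rounds: int = 3):
    """Alternate self-annotation, self-verification filtering, and a
    lightweight parameter update (e.g. LoRA) for a few rounds."""
    for _ in range(rounds):
        data = self_label(kb_text, model.generate)            # self-annotation
        verified = [ex for ex in data if self_check(model, ex)]  # keep consistent pairs
        model = fine_tune(model, verified)                    # caller-supplied updater
    return model

def self_check(model, ex: dict) -> bool:
    """Re-answer the stored question and keep the pair only if the fresh
    answer agrees with the recorded one (one plausible scoring rule; the
    paper's actual verification criterion may differ)."""
    fresh = model.generate(f"Context: {ex['context']}\nQ: {ex['question']}\nA:")
    return fresh.strip().lower() == ex["answer"].strip().lower()
```

Filtering each round's data through the model's own verification is what the abstract credits with faster convergence: later rounds fine-tune only on pairs the partially adapted model can already reproduce.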