NILE: Internal Consistency Alignment in Large Language Models

📅 2024-12-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Instruction fine-tuning (IFT) data often conflicts with the knowledge that large language models (LLMs) internalize during pretraining, leading to alignment failure. To address this, we propose NILE, the first framework establishing the "internal consistency alignment" paradigm: it explicitly activates and rectifies the model's internal knowledge via controllable knowledge elicitation and dynamic answer revision. Furthermore, we design a scalable Internal Consistency Filtering (ICF) method that selects high-consistency samples through self-feedback distillation, response regeneration, and consistency scoring. NILE achieves gains of up to 66.6% on Arena-Hard and 68.5% on Alpaca-Eval V2. Ablation studies confirm the efficacy of each component. Our results demonstrate that enhancing consistency between IFT data and the model's intrinsic knowledge is a critical pathway to unlocking LLMs' generalization capabilities.
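The pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `elicit_knowledge`, `revise_answer`, and the Jaccard-based `consistency_score` are placeholder stand-ins (NILE uses the target LLM itself for elicitation, revision, and scoring), and all prompt templates are invented for the sketch.

```python
def elicit_knowledge(llm, instruction):
    """Prompt the pretrained LLM for its internal knowledge about the instruction.
    The prompt template here is a made-up placeholder."""
    return llm(f"What do you know about: {instruction}")

def revise_answer(llm, instruction, answer, knowledge):
    """Revise the IFT answer so it agrees with the elicited internal knowledge."""
    return llm(f"Instruction: {instruction}\nKnowledge: {knowledge}\nRevise this answer: {answer}")

def consistency_score(answer, knowledge):
    """Toy proxy for consistency: token-overlap Jaccard similarity.
    The paper scores consistency via the LLM's own feedback; this
    stand-in just keeps the sketch self-contained and runnable."""
    a, k = set(answer.lower().split()), set(knowledge.lower().split())
    return len(a & k) / len(a | k) if a | k else 0.0

def nile_align(llm, dataset, threshold=0.2):
    """Revise every (instruction, answer) pair, then keep only
    high-consistency samples (the ICF step)."""
    aligned = []
    for instruction, answer in dataset:
        knowledge = elicit_knowledge(llm, instruction)
        revised = revise_answer(llm, instruction, answer, knowledge)
        if consistency_score(revised, knowledge) >= threshold:
            aligned.append((instruction, revised))
    return aligned
```

In practice the threshold would be tuned so that low-consistency samples (where the revised answer still contradicts the model's elicited knowledge) are dropped from the fine-tuning set.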

📝 Abstract
As a crucial step to enhance LLM alignment with human intentions, Instruction Fine-Tuning (IFT) places high demands on dataset quality. However, existing IFT datasets often contain knowledge that is inconsistent with LLMs' internal knowledge learned from the pre-training phase, which can greatly affect the efficacy of IFT. To address this issue, we introduce the NILE (iNternal consIstency aLignmEnt) framework, aimed at optimizing IFT datasets to further unlock LLMs' capability. NILE operates by eliciting the target pre-trained LLM's internal knowledge corresponding to instruction data. The internal knowledge is leveraged to revise the answers in IFT datasets. Additionally, we propose a novel Internal Consistency Filtering (ICF) method to filter training samples, ensuring their high consistency with the LLM's internal knowledge. Our experiments demonstrate that NILE-aligned IFT datasets sharply boost LLM performance across multiple LLM ability evaluation datasets, achieving up to a 66.6% gain on Arena-Hard and 68.5% on Alpaca-Eval V2. Further analysis confirms that each component of the NILE framework contributes to these substantial performance improvements, and provides compelling evidence that dataset consistency with pre-trained internal knowledge is pivotal for maximizing LLM potential.
Problem

Research questions and friction points this paper is trying to address.

Addresses inconsistency between instruction fine-tuning data and LLMs' internal knowledge
Proposes framework to revise dataset answers using model's internal knowledge
Introduces filtering method to ensure high consistency with pre-trained knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Revising IFT dataset answers using LLM internal knowledge
Filtering training samples for internal consistency alignment
Optimizing instruction fine-tuning by reducing knowledge conflicts