VAULT: Vigilant Adversarial Updates via LLM-Driven Retrieval-Augmented Generation for NLI

📅 2025-08-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient robustness of natural language inference (NLI) models, this paper proposes a fully automated adversarial RAG framework that systematically identifies and mitigates model vulnerabilities via a closed-loop pipeline comprising retrieval, adversarial hypothesis generation, and iterative retraining. Methodologically: (i) it introduces a few-shot retrieval strategy integrating semantic similarity (BGE) and lexical matching (BM25); (ii) it employs multi-LLM ensemble prompting to generate high-quality adversarial hypotheses while rigorously verifying label fidelity; and (iii) it performs robust fine-tuning by progressively injecting adversarial examples into training data using a hybrid sampling ratio. On SNLI, ANLI, and MultiNLI, RoBERTa-base achieves accuracies of 92.60%, 80.95%, and 71.99%, respectively, surpassing state-of-the-art in-context adversarial methods by up to 2.0%. This work marks the first fully automated, large-scale construction of high-fidelity adversarial data and corresponding model enhancement without human intervention.

📝 Abstract
We introduce VAULT, a fully automated adversarial RAG pipeline that systematically uncovers and remedies weaknesses in NLI models through three stages: retrieval, adversarial generation, and iterative retraining. First, we perform balanced few-shot retrieval by embedding premises with both semantic (BGE) and lexical (BM25) similarity. Next, we assemble these contexts into LLM prompts to generate adversarial hypotheses, which are then validated by an LLM ensemble for label fidelity. Finally, the validated adversarial examples are injected back into the training set at increasing mixing ratios, progressively fortifying a zero-shot RoBERTa-base model. On standard benchmarks, VAULT lifts RoBERTa-base accuracy from 88.48% to 92.60% on SNLI (+4.12%), from 75.04% to 80.95% on ANLI (+5.91%), and from 54.67% to 71.99% on MultiNLI (+17.32%). It also consistently outperforms prior in-context adversarial methods by up to 2.0% across datasets. By automating high-quality adversarial data curation at scale, VAULT enables rapid, human-independent robustness improvements in NLI.
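The balanced retrieval stage described above can be sketched as a hybrid scorer that interpolates between dense semantic similarity and lexical BM25. The sketch below is illustrative, not the paper's implementation: the real pipeline uses BGE sentence embeddings, which are replaced here by a toy normalized bag-of-words vector so the example stays self-contained; the `alpha` weight and function names are assumptions.

```python
# Hybrid retrieval sketch: score = alpha * semantic + (1 - alpha) * lexical.
# BM25 is implemented from its standard formula; toy_embed() stands in for
# a BGE embedding model.
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score every document in `corpus` against `query` with Okapi BM25."""
    tokenized = [doc.lower().split() for doc in corpus]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(corpus)
    df = Counter(t for doc in tokenized for t in set(doc))  # document freq.
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

def toy_embed(text):
    """Stand-in for a BGE sentence embedding: L2-normalized bag of words."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {t: v / norm for t, v in counts.items()}

def cosine(u, v):
    return sum(w * v.get(t, 0.0) for t, w in u.items())

def hybrid_retrieve(premise, corpus, alpha=0.5, k=2):
    """Return the top-k premises under the interpolated hybrid score."""
    sem = [cosine(toy_embed(premise), toy_embed(doc)) for doc in corpus]
    lex = bm25_scores(premise, corpus)
    hi = max(lex) or 1.0
    lex = [s / hi for s in lex]  # normalize BM25 scores to [0, 1]
    order = sorted(range(len(corpus)),
                   key=lambda i: alpha * sem[i] + (1 - alpha) * lex[i],
                   reverse=True)
    return [corpus[i] for i in order[:k]]
```

In a real system the two score distributions have very different scales, which is why the BM25 scores are min-max normalized before interpolation.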
Problem

Research questions and friction points this paper is trying to address.

Automatically uncovers weaknesses in NLI models
Generates adversarial hypotheses via LLM prompts
Improves model robustness through iterative retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated adversarial RAG pipeline for NLI
Balanced retrieval with semantic and lexical similarity
Iterative retraining with validated adversarial examples
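The iterative retraining idea above can be sketched as a loop that mixes validated adversarial examples into the clean training set at a schedule of increasing ratios. The ratio schedule, the `mix_training_set` helper, and the commented-out fine-tuning hook are illustrative assumptions, not the paper's exact procedure.

```python
# Progressive adversarial injection sketch: each round mixes in a larger
# fraction of adversarial examples before fine-tuning the model again.
import random

def mix_training_set(clean, adversarial, ratio, seed=0):
    """Return the clean set plus a `ratio` fraction (of the clean set's
    size) of adversarial examples, sampled without replacement."""
    rng = random.Random(seed)
    n_adv = min(len(adversarial), int(ratio * len(clean)))
    return clean + rng.sample(adversarial, n_adv)

def iterative_retrain(clean, adversarial, ratios=(0.1, 0.2, 0.3)):
    """One fine-tuning round per mixing ratio; returns the set sizes."""
    sizes = []
    for r in ratios:
        batch = mix_training_set(clean, adversarial, r)
        # model.fine_tune(batch)  # placeholder for RoBERTa-base fine-tuning
        sizes.append(len(batch))
    return sizes
```

Sampling without replacement and seeding the RNG keeps each round reproducible, which matters when comparing robustness across retraining iterations.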