🤖 AI Summary
This work addresses the insufficient adversarial robustness of existing search-enabled large language model (LLM) fact-checking systems under a realistic threat model that assumes access only to the input claim. To this end, the authors propose DECEIVE-AFC, a framework that introduces, for the first time, a claim-level black-box adversarial attack strategy. Using an agent-driven trajectory search and verification mechanism, DECEIVE-AFC perturbs the retrieval and reasoning processes to generate highly effective adversarial claims without access to evidence sources or internal model parameters. Experiments show that the method reduces fact-checking accuracy from 78.7% to 53.7% across multiple benchmarks and real-world systems, outperforming existing claim-level attack approaches and exhibiting strong cross-system transferability.
📝 Abstract
Fact-checking systems built on search-enabled large language models (LLMs) have shown strong potential for verifying claims by dynamically retrieving external evidence. However, the robustness of such systems against adversarial attacks remains insufficiently understood. In this work, we study adversarial claim attacks against search-enabled LLM-based fact-checking systems under a realistic input-only threat model. We propose DECEIVE-AFC, an agent-based adversarial attack framework that integrates novel claim-level attack strategies with adversarial claim validity evaluation principles. DECEIVE-AFC systematically explores adversarial attack trajectories that disrupt search behavior, evidence retrieval, and LLM-based reasoning, without relying on access to evidence sources or model internals. Extensive evaluations on benchmark datasets and real-world systems demonstrate that our attacks substantially degrade verification performance, reducing accuracy from 78.7% to 53.7%, and significantly outperform existing claim-level attack baselines while exhibiting strong cross-system transferability.
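The abstract describes DECEIVE-AFC only at a high level, and no code accompanies this summary. Purely as an illustration of the input-only threat model it assumes, the sketch below shows what a minimal claim-level black-box attack loop could look like: an attacker proposes semantically faithful rewrites of a claim, queries the fact-checker as an opaque oracle, and stops once the returned verdict flips. All names here (`propose_perturbations`, `query_fact_checker` via the `check` callable, `is_semantically_faithful`) are hypothetical stand-ins, not the authors' implementation of the agent-driven trajectory search.

```python
import random
from typing import Callable, Optional

def propose_perturbations(claim: str, n: int = 4) -> list[str]:
    """Hypothetical surface-level rewriter. In the paper this role would be
    played by an attacker agent; simple templates stand in for it here."""
    templates = [
        "It has been reported that {c}.",
        "According to several recent accounts, {c}.",
        "{c}, as a number of commentators have noted.",
        "Contrary to earlier skepticism, {c}.",
    ]
    body = claim.rstrip(".")
    return [t.format(c=body) for t in random.sample(templates, min(n, len(templates)))]

def is_semantically_faithful(original: str, candidate: str) -> bool:
    """Validity check: an adversarial claim must keep the original meaning
    (and thus its ground-truth label). Stub for an NLI or similarity model."""
    return True

def attack(claim: str,
           check: Callable[[str], str],
           target_verdict: str,
           rounds: int = 3) -> Optional[str]:
    """Greedy search over claim rewrites under an input-only threat model:
    `check` is the fact-checking system, queried strictly as a black box."""
    frontier = [claim]
    for _ in range(rounds):
        next_frontier = []
        for current in frontier:
            for candidate in propose_perturbations(current):
                if not is_semantically_faithful(claim, candidate):
                    continue
                if check(candidate) == target_verdict:
                    return candidate  # verdict flipped: adversarial claim found
                next_frontier.append(candidate)
        frontier = next_frontier or frontier
    return None  # no successful rewrite within the query budget

# Toy usage: a dummy checker that "refutes" any claim hedged with "reported".
if __name__ == "__main__":
    dummy = lambda c: "REFUTED" if "reported" in c else "SUPPORTED"
    print(attack("The Eiffel Tower is in Paris", dummy, target_verdict="REFUTED"))
```

In a real setting the dummy checker would be replaced by calls to a deployed search-enabled fact-checking system, and the template rewriter and faithfulness stub by the kind of agent-driven generation and validity evaluation the abstract refers to.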