🤖 AI Summary
Existing long-text fact-checking methods decompose a text into atomic claims and verify each with a fixed-step retrieval-then-verification pipeline. This is inefficient, underuses the LLM's internal knowledge of the claim, and departs from the iterative way humans verify facts.
Method: We propose FIRE, a confidence-driven agent framework that unifies retrieval and verification in a single self-regulating loop. A large language model acts as the agent: it dynamically orchestrates retrieval, decides adaptively when to stop, and aggregates evidence across multiple rounds for a holistic verdict.
Contribution/Results: Our method emulates human incremental verification while maintaining comparable accuracy. It reduces LLM invocation cost by 7.6× and search cost by 16.5×, significantly improving computational efficiency and practical deployability.
📝 Abstract
Fact-checking long-form text is challenging, and it is therefore common practice to break it down into multiple atomic claims. The typical approach to fact-checking these atomic claims involves retrieving a fixed number of pieces of evidence, followed by a verification step. However, this method is usually not cost-effective, as it underutilizes the verification model's internal knowledge of the claim and fails to replicate the iterative reasoning process in human search strategies. To address these limitations, we propose FIRE, a novel agent-based framework that integrates evidence retrieval and claim verification in an iterative manner. Specifically, FIRE employs a unified mechanism to decide whether to provide a final answer or generate a subsequent search query, based on its confidence in the current judgment. We compare FIRE with other strong fact-checking frameworks and find that it achieves slightly better performance while reducing large language model (LLM) costs by an average of 7.6 times and search costs by 16.5 times. These results indicate that FIRE holds promise for application in large-scale fact-checking operations. Our code is available at https://github.com/mbzuai-nlp/fire.git.
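The unified stop-or-search decision described above can be sketched as a simple loop: at each round the model either commits to a verdict (when its confidence clears a threshold) or emits the next search query. This is a minimal illustrative sketch, not the authors' implementation; the `llm_step` and `search` callables, the confidence threshold, and the round limit are all hypothetical stand-ins for a real LLM call and a real search API.

```python
def fire_verify(claim, llm_step, search, max_rounds=5, threshold=0.9):
    """Confidence-driven iterative retrieval-verification loop (sketch).

    llm_step(claim, evidence) -> (verdict, confidence, next_query)
        A hypothetical LLM call that, given the claim and the evidence
        gathered so far, returns its current verdict, its confidence in
        that verdict, and a follow-up search query if more evidence
        would help.
    search(query) -> list[str]
        A hypothetical retriever returning evidence snippets.
    """
    evidence = []
    verdict = None
    for _ in range(max_rounds):
        verdict, confidence, next_query = llm_step(claim, evidence)
        if confidence >= threshold:
            # Confident enough: answer now, saving further LLM/search calls.
            return verdict, evidence
        # Not confident: retrieve more evidence and try again.
        evidence.extend(search(next_query))
    # Round budget exhausted: fall back to the latest verdict.
    return verdict, evidence
```

Because the loop can terminate at round one when the model's internal knowledge already suffices, the expected number of LLM and search calls drops relative to a fixed-step pipeline, which is the source of the cost savings reported above.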