EMULATE: A Multi-Agent Framework for Determining the Veracity of Atomic Claims by Emulating Human Actions

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying the truthfulness of atomic claims is a critical challenge in fact-checking, yet existing approaches diverge from the stepwise, evidence-driven process humans follow, which includes iterative retrieval, multi-source evaluation, and structured reasoning. To bridge this gap, the authors propose a human-inspired multi-agent collaborative framework that emulates human verification behavior through role-specialized agents (e.g., search-and-ranking, webpage-content assessment). The framework integrates search-engine API invocation, multi-stage iterative retrieval, modular large language model (LLM) agent orchestration, and rule-based evidence evaluation, enhancing both the factual fidelity and the interpretability of the verification process. Extensive experiments across multiple benchmarks show consistent improvements: the method achieves higher F1 scores and accuracy than state-of-the-art baselines, validating the effectiveness of the anthropomorphic multi-agent paradigm for atomic-level fact-checking.
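The iterative, role-specialized loop described above can be sketched as follows. This is a minimal illustration of the general pattern, not the paper's actual implementation: all agent names, signatures, and the stopping rule (a confidence flag returned by the verdict agent) are assumptions.

```python
def verify_claim(claim, search, rank_agent, page_agent, verdict_agent,
                 max_rounds=3):
    """Hypothetical sketch: iteratively gather evidence, re-querying only
    while the verdict agent reports it is not yet confident."""
    evidence = []
    verdict = "Not Enough Evidence"
    query = claim
    for _ in range(max_rounds):
        results = search(query)                  # search-engine API call
        ranked = rank_agent(claim, results)      # rank hits by predefined criteria
        for url in ranked[:2]:                   # assess top webpages only
            snippet = page_agent(claim, url)     # webpage-content assessment
            if snippet:
                evidence.append(snippet)
        # verdict agent classifies and, if unsure, proposes a refined query
        verdict, confident, query = verdict_agent(claim, evidence)
        if confident:                            # stop once evidence suffices
            break
    return verdict, evidence
```

Each callable here stands in for one specialized agent; in practice these would wrap LLM calls and a search API, with the loop bounding how many retrieval rounds are allowed.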

📝 Abstract
Determining the veracity of atomic claims is an imperative component of many recently proposed fact-checking systems. Many approaches tackle this problem by first retrieving evidence by querying a search engine and then performing classification by providing the evidence set and atomic claim to a large language model, but this process deviates from what a human would do in order to perform the task. Recent work attempted to address this issue by proposing iterative evidence retrieval, allowing for evidence to be collected several times and only when necessary. Continuing along this line of research, we propose a novel claim verification system, called EMULATE, which is designed to better emulate human actions through the use of a multi-agent framework where each agent performs a small part of the larger task, such as ranking search results according to predefined criteria or evaluating webpage content. Extensive experiments on several benchmarks show clear improvements over prior work, demonstrating the efficacy of our new multi-agent framework.
Problem

Research questions and friction points this paper is trying to address.

Determining the veracity of atomic claims in fact-checking systems
Improving evidence retrieval by emulating humans' iterative search process
Enhancing claim verification through multi-agent task division
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework emulates human verification actions
Iterative evidence retrieval improves claim accuracy
Agents specialize in ranking and evaluating content
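The summary also mentions rule-based evidence evaluation. One plausible form, shown purely as an assumption about what such rules might look like (the paper's actual rules are not given here), is aggregating per-evidence stance labels into a claim verdict:

```python
from collections import Counter

def aggregate(labels):
    """Illustrative rule-based mapping from evidence stance labels
    ("supports" / "refutes") to a claim verdict; an assumption, not
    the paper's method."""
    if not labels:
        return "Not Enough Evidence"
    counts = Counter(labels)
    # A refutation at least as strong as the support wins outright
    if counts["refutes"] > 0 and counts["refutes"] >= counts["supports"]:
        return "Refuted"
    if counts["supports"] > counts["refutes"]:
        return "Supported"
    return "Not Enough Evidence"
```

Rules of this kind keep the final classification step transparent, which matches the interpretability goal stated in the summary.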