Team HUMANE at AVeriTeC 2025: HerO 2 for Efficient Fact Verification

📅 2025-07-15
🤖 AI Summary
To address low-quality retrieved evidence and costly veracity prediction in open-domain fact verification, this paper proposes an efficient verification framework. First, document summarization and answer reformulation are employed to make evidence more relevant and concise. Second, post-training quantization compresses the veracity prediction model to fit computational constraints. Third, updated language model backbones are integrated to strengthen semantic understanding. The method ranks second on the AVeriTeC 2025 leaderboard while achieving the shortest runtime among the top three systems, making it the only top-three solution that combines high accuracy with low latency. The implementation is open-sourced and shows strong potential for real-world deployment.
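The summary does not specify which post-training quantization scheme HerO 2 uses. As an illustration only, a minimal sketch of one common approach, symmetric per-tensor int8 quantization of a weight matrix (function names and the NumPy setup are hypothetical, not from the paper):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    Maps float weights into [-127, 127] with a single scale factor;
    the originals are approximated as q * scale at inference time.
    """
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 view of the quantized weights."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and measure the round-trip error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
max_err = np.abs(w - dequantize(q, scale)).max()
```

The round-trip error of this scheme is bounded by half the scale step, which is why int8 compression typically costs little accuracy while cutting memory and latency.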

📝 Abstract
This paper presents HerO 2, Team HUMANE's system for the AVeriTeC shared task at the FEVER-25 workshop. HerO 2 is an enhanced version of HerO, the best-performing open-source model from the previous year's challenge. It improves evidence quality through document summarization and answer reformulation, optimizes veracity prediction via post-training quantization under computational constraints, and enhances overall system performance by integrating updated language model (LM) backbones. HerO 2 ranked second on the leaderboard while achieving the shortest runtime among the top three systems, demonstrating both high efficiency and strong potential for real-world fact verification. The code is available at https://github.com/ssu-humane/HerO2.
Problem

Research questions and friction points this paper is trying to address.

Retrieved evidence in open-domain fact verification is often noisy and low quality
Veracity prediction with large LMs is expensive under computational constraints
Top-ranking systems rarely combine high accuracy with low runtime
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improves evidence via summarization and reformulation
Optimizes prediction with post-training quantization
Enhances performance using updated LM backbones