🤖 AI Summary
To address the challenges of low-quality evidence and inefficient reasoning in open-domain fact verification, this paper proposes an efficient verification framework. First, document summarization and answer reconstruction are employed to improve the relevance and conciseness of retrieved evidence. Second, post-training quantization is applied to compress the model under computational constraints. Third, updated large language model backbones are integrated to strengthen semantic understanding. The proposed method achieves high accuracy, ranking second on the AVeriTeC 2025 leaderboard, while attaining the shortest runtime among the top three systems. It is the only one of those three that delivers both high accuracy and low latency. The implementation is open-sourced and shows strong potential for real-world deployment.
📝 Abstract
This paper presents HerO 2, Team HUMANE's system for the AVeriTeC shared task at the FEVER-25 workshop. HerO 2 is an enhanced version of HerO, the best-performing open-source model from the previous year's challenge. It improves evidence quality through document summarization and answer reformulation, optimizes veracity prediction via post-training quantization under computational constraints, and enhances overall system performance by integrating updated language model (LM) backbones. HerO 2 ranked second on the leaderboard while achieving the shortest runtime among the top three systems, demonstrating both high efficiency and strong potential for real-world fact verification. The code is available at https://github.com/ssu-humane/HerO2.
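The abstract mentions post-training quantization as the means of fitting veracity prediction under computational constraints. The paper's exact procedure is not given here; as a generic illustration only, the following sketch shows the basic idea behind symmetric per-tensor int8 post-training quantization, i.e., compressing trained float weights into 8-bit integers plus a scale factor, without retraining:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 post-training quantization (toy sketch).

    Maps the largest-magnitude weight to +/-127; all other weights are
    rounded to the nearest step of the resulting scale.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 tensor."""
    return q.astype(np.float32) * scale

# Toy example: quantize a small weight matrix and check the round-trip error.
w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = float(np.abs(w - w_hat).max())
```

This halves-or-better the memory footprint of each weight tensor (8 bits instead of 32) at the cost of a small rounding error; production systems typically use library-provided quantized kernels rather than a hand-rolled routine like this one.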