HCQA-1.5 @ Ego4D EgoSchema Challenge 2025

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses improving the reliability of low-confidence answers in egocentric video question answering (EgoVQA). We propose an extended Hierarchical Confidence-based Question Answering (HCQA) framework. Methodologically, it introduces (1) a multi-source prediction aggregation and dynamic confidence filtering mechanism that enables a robust cross-model ensemble, and (2) a fine-grained vision-language joint reasoning module tailored to low-confidence cases, integrating prompt-guided attention modeling with multiple-choice answer reranking. Evaluated on the EgoSchema blind test set (5,000+ human-annotated samples), the method achieves 77.0% accuracy, significantly surpassing the 2024 state of the art and leading competition entries. To the best of our knowledge, this is the first work to jointly embed confidence calibration and fine-grained cross-modal reasoning into a unified reliability optimization paradigm for EgoVQA.

📝 Abstract
In this report, we present the method that achieved third place in the Ego4D EgoSchema Challenge at CVPR 2025. To improve the reliability of answer prediction in egocentric video question answering, we propose an effective extension to the previously proposed HCQA framework. Our approach introduces a multi-source aggregation strategy to generate diverse predictions, followed by a confidence-based filtering mechanism that selects high-confidence answers directly. For low-confidence cases, we incorporate a fine-grained reasoning module that performs additional visual and contextual analysis to refine the predictions. Evaluated on the EgoSchema blind test set, our method achieves 77% accuracy on over 5,000 human-curated multiple-choice questions, outperforming last year's winning solution and the majority of participating teams. Our code will be released at https://github.com/Hyu-Zhang/HCQA.
Problem

Research questions and friction points this paper is trying to address.

Improving answer prediction reliability in egocentric video QA
Multi-source aggregation for diverse prediction generation
Confidence-based filtering with fine-grained reasoning refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-source aggregation for diverse predictions
Confidence-based filtering for high-confidence answers
Fine-grained reasoning for low-confidence cases
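The aggregate-then-filter pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `aggregate_and_filter`, the majority-vote aggregation, and the confidence threshold of 0.8 are all assumptions for demonstration; the paper's actual confidence scoring and fine-grained reasoning module are more involved.

```python
from collections import Counter

def aggregate_and_filter(predictions, confidences, threshold=0.8):
    """Aggregate answers from multiple sources by majority vote, then
    flag the result for fine-grained re-reasoning when the average
    confidence of the winning answer falls below the threshold."""
    votes = Counter(predictions)
    top_answer, top_count = votes.most_common(1)[0]
    # Average confidence over the sources that voted for the winner.
    avg_conf = sum(c for p, c in zip(predictions, confidences)
                   if p == top_answer) / top_count
    needs_refinement = avg_conf < threshold
    return top_answer, avg_conf, needs_refinement

# Example: three of four sources agree on "B" with high confidence,
# so the answer is accepted directly (no refinement needed).
ans, conf, needs_refinement = aggregate_and_filter(
    ["B", "B", "B", "C"], [0.9, 0.85, 0.95, 0.4])
```

In the paper's pipeline, cases where `needs_refinement` is true would be routed to the fine-grained vision-language reasoning module instead of being answered directly.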
Haoyu Zhang
Harbin Institute of Technology (Shenzhen), Pengcheng Laboratory
Yisen Feng
Harbin Institute of Technology (Shenzhen)
Multimodal Analysis
Qiaohui Chu
Harbin Institute of Technology (Shenzhen)
Multimodal Analysis, Egocentric Vision
Meng Liu
Shandong Jianzhu University
Weili Guan
Harbin Institute of Technology (Shenzhen), Pengcheng Laboratory
Yaowei Wang
The Hong Kong Polytechnic University
Liqiang Nie
Harbin Institute of Technology (Shenzhen), Pengcheng Laboratory