🤖 AI Summary
This work addresses the reliability of low-confidence answers in egocentric video question answering (EgoVQA). We propose an extended Hierarchical Confidence-based Question Answering (HCQA) framework. Methodologically, it introduces (1) a multi-source prediction aggregation and dynamic confidence filtering mechanism that enables a robust cross-model ensemble, and (2) a fine-grained vision-language joint reasoning module tailored to low-confidence cases, integrating prompt-guided attention modeling with multiple-choice answer reranking. Evaluated on the EgoSchema blind test set (over 5,000 human-annotated samples), our method achieves 77.0% accuracy, significantly surpassing the 2024 winning solution and other leading competition entries. To the best of our knowledge, this is the first work to jointly embed confidence calibration and fine-grained cross-modal reasoning into a unified reliability optimization paradigm for EgoVQA.
📝 Abstract
In this report, we present our third-place solution to the Ego4D EgoSchema Challenge at CVPR 2025. To improve the reliability of answer prediction in egocentric video question answering, we propose an effective extension to the previously proposed HCQA framework. Our approach introduces a multi-source aggregation strategy to generate diverse predictions, followed by a confidence-based filtering mechanism that directly accepts high-confidence answers. For low-confidence cases, we incorporate a fine-grained reasoning module that performs additional visual and contextual analysis to refine the predictions. Evaluated on the EgoSchema blind test set, our method achieves 77% accuracy on over 5,000 human-curated multiple-choice questions, outperforming last year's winning solution and the majority of participating teams. Our code will be released at https://github.com/Hyu-Zhang/HCQA.
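The aggregate-then-filter pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `ensemble_answer`, the vote-agreement confidence score, the `threshold` value, and the `refine` callback (standing in for the fine-grained reasoning module) are all assumptions for the sake of the example.

```python
from collections import Counter

def ensemble_answer(predictions, threshold=0.8, refine=None):
    """Hypothetical sketch of multi-source aggregation with
    confidence-based filtering.

    predictions: list of (choice, confidence) pairs, one per source model.
    threshold:   illustrative vote-agreement cutoff (not from the paper).
    refine:      fallback for low-confidence cases, standing in for the
                 fine-grained visual/contextual reasoning module.
    """
    # Aggregate diverse predictions by majority vote across models.
    votes = Counter(choice for choice, _ in predictions)
    top_choice, count = votes.most_common(1)[0]
    agreement = count / len(predictions)

    if agreement >= threshold:
        # High confidence: accept the ensemble answer directly.
        return top_choice
    # Low confidence: defer to fine-grained reasoning to refine the answer.
    return refine(top_choice) if refine is not None else top_choice
```

In this toy version, confidence is simply the fraction of models agreeing on the top answer; a real system could instead calibrate per-model confidence scores before thresholding.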