Solution for Meta KDD Cup'25: A Comprehensive Three-Step Framework for Vision Question Answering

📅 2025-07-29
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address hallucination issues in Vision-Language Large Models (VLLMs) for Visual Question Answering (VQA), this paper proposes a three-stage Retrieval-Augmented Generation (RAG) framework. The method integrates multi-source vision–text retrieval, retrieval result re-ranking, and multi-task fine-tuning, augmented by a vision-context-aware data augmentation strategy. It natively supports multi-turn interaction and heterogeneous multimodal information fusion, thereby enhancing visual semantic understanding and alignment with external knowledge. Evaluated on the CRAG-MM benchmark across three tasks, our approach achieves automatic evaluation rankings of 3rd, 3rd, and 1st, respectively; in human evaluation on Task 3, it ranks 2nd. These results demonstrate its effectiveness in mitigating hallucinations and improving factual consistency.
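The summary above describes a retrieve → re-rank → generate pipeline with abstention when evidence is weak. The following is a minimal, hypothetical sketch of that control flow, assuming term-overlap scoring and a stand-in generator; the function names, scoring heuristics, and abstention rule are illustrative assumptions, not the paper's published implementation.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float  # relevance score assigned by the current stage

def retrieve(query: str, sources: dict, k: int = 5) -> list:
    """Stage 1: pool candidates from multiple sources, scored by naive term overlap."""
    q_terms = set(query.lower().split())
    pool = [Passage(t, float(len(q_terms & set(t.lower().split()))))
            for docs in sources.values() for t in docs]
    return sorted(pool, key=lambda p: p.score, reverse=True)[:k]

def rerank(query: str, passages: list, top_n: int = 2) -> list:
    """Stage 2: re-score candidates (overlap normalized by passage length)."""
    q_terms = set(query.lower().split())
    for p in passages:
        terms = p.text.lower().split()
        p.score = len(q_terms & set(terms)) / max(len(terms), 1)
    return sorted(passages, key=lambda p: p.score, reverse=True)[:top_n]

def answer(query: str, context: list) -> str:
    """Stage 3: stand-in for the fine-tuned VLLM; abstains when retrieval is weak."""
    if not context or context[0].score == 0.0:
        return "I don't know"  # abstention is one way to curb hallucination
    return context[0].text

# Toy multi-source corpus standing in for image-KG and web search results.
sources = {
    "image_kg": ["The Eiffel Tower is 330 metres tall"],
    "web": ["Paris is the capital of France", "The Louvre is in Paris"],
}
query = "How tall is the Eiffel Tower?"
ctx = rerank(query, retrieve(query, sources))
print(answer(query, ctx))  # → The Eiffel Tower is 330 metres tall
```

In the actual system the overlap heuristics would be replaced by learned retrievers and a cross-encoder re-ranker, and stage 3 by the multi-task fine-tuned VLLM; the sketch only shows how the three stages compose.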

πŸ“ Abstract
Vision Large Language Models (VLLMs) have improved multi-modal understanding and visual question answering (VQA), but still suffer from hallucinated answers. Multi-modal Retrieval-Augmented Generation (RAG) helps address these issues by incorporating external information, yet challenges remain in visual context comprehension, multi-source retrieval, and multi-turn interactions. To address these challenges, Meta constructed the CRAG-MM benchmark and launched the CRAG-MM Challenge at KDD Cup 2025, which consists of three tasks. This paper describes the BlackPearl team's solutions to all tasks in Meta KDD Cup'25. We use a single model for each task, with key methods including data augmentation, RAG, reranking, and multi-task fine-tuning. Our solutions achieve automatic evaluation rankings of 3rd, 3rd, and 1st on the three tasks, and win second place on Task 3 after human evaluation.
Problem

Research questions and friction points this paper is trying to address.

Addressing hallucinated answers in Vision Question Answering using VLLMs
Improving visual context comprehension and multi-source retrieval in RAG
Enhancing multi-turn interactions for better VQA performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data augmentation for enhanced model training
Multi-modal RAG for external information integration
Multi-task fine-tuning for improved performance
Zijian Zhang
Meituan, Shanghai, China
Xiaocheng Zhang
Meituan, Beijing, China
Yang Zhou
Meituan, Shanghai, China
Zhimin Lin
Meituan, Beijing, China
Peng Yan
Research Assistant at ZHAW, PhD student at UZH
Deep Learning · Transfer Learning · Intelligent Algorithm