🤖 AI Summary
This study addresses critical trustworthiness risks in video large language models (videoLLMs), including factual inaccuracies, harmful content, bias, hallucinations, and privacy leakage. To this end, we propose the first five-dimensional trustworthiness evaluation framework, encompassing truthfulness, safety, robustness, fairness, and privacy. We introduce Trust-videoLLMs, a benchmark of 30 tasks spanning dynamic visual scenarios and cross-modal interactions, built on a spatiotemporally aware hybrid of adapted, synthetic, and annotated videos. Methodologically, the framework combines multimodal prompt engineering, dynamic video sampling and perturbation injection, cross-modal consistency verification, and quantitative privacy-risk analysis. A comprehensive evaluation of 23 state-of-the-art videoLLMs reveals significant vulnerabilities under dynamic scenes and cross-modal perturbations. Finally, we open-source an extensible evaluation toolkit to advance standardized assessment of trustworthy video AI.
📝 Abstract
Recent advancements in multimodal large language models for video understanding (videoLLMs) have improved their ability to process dynamic multimodal data. However, trustworthiness challenges, including factual inaccuracies, harmful content, biases, hallucinations, and privacy risks, undermine reliability given the spatiotemporal complexity of video data. This study introduces Trust-videoLLMs, a comprehensive benchmark evaluating videoLLMs across five dimensions: truthfulness, safety, robustness, fairness, and privacy. Comprising 30 tasks with adapted, synthetic, and annotated videos, the framework assesses dynamic visual scenarios, cross-modal interactions, and real-world safety concerns. Our evaluation of 23 state-of-the-art videoLLMs (5 commercial, 18 open-source) reveals significant limitations in dynamic visual scene understanding and in resilience to cross-modal perturbations. Open-source videoLLMs occasionally lead on truthfulness but trail commercial models in overall credibility, with data diversity mattering more than model scale. These findings highlight the need for advanced safety alignment to enhance both capability and trustworthiness. Trust-videoLLMs provides a publicly available, extensible toolbox for standardized trustworthiness assessment, bridging the gap between accuracy-focused benchmarks and the critical demands of robustness, safety, fairness, and privacy.