AI Summary
This work addresses the lack of systematic evaluation of the capability of multimodal large language models (MLLMs) to detect subtle visual cues in facial spoofing and forgery detection. We introduce SHIELD, the first multimodal benchmark tailored for face security, supporting RGB, infrared, depth, and audio inputs, as well as GAN- and diffusion-based synthetic forgeries, with tasks including binary authenticity classification and multiple-choice question answering. To enable structured reasoning, we propose Multi-Attribute Chain-of-Thought (MA-COT), a novel inference paradigm that disentangles task-relevant and task-irrelevant visual attributes. SHIELD is the first benchmark to systematically assess MLLMs' cross-modal face attack detection performance under zero-shot, few-shot, and CoT settings. Experiments demonstrate that MLLMs achieve superior generalization and interpretability, significantly outperforming baseline methods. Our work establishes a new evaluation paradigm and technical foundation for biometric security.
Abstract
Multimodal large language models (MLLMs) have demonstrated strong performance on vision-related tasks, leveraging their visual semantic comprehension and reasoning abilities. However, their ability to detect subtle visual spoofing and forgery clues in face attack detection tasks remains underexplored. In this paper, we introduce a benchmark, SHIELD, to evaluate MLLMs for face spoofing and forgery detection. Specifically, we design true/false and multiple-choice questions to assess MLLM performance on multimodal face data across two tasks. For the face anti-spoofing task, we evaluate three modalities (i.e., RGB, infrared, and depth) under six attack types. For the face forgery detection task, we evaluate GAN-based and diffusion-based data, incorporating visual and acoustic modalities. We conduct zero-shot and few-shot evaluations in both standard and chain-of-thought (CoT) settings. Additionally, we propose a novel Multi-Attribute Chain-of-Thought (MA-COT) paradigm for describing and judging various task-specific and task-irrelevant attributes of face images. The findings of this study demonstrate that MLLMs exhibit strong potential for addressing the challenges associated with the security of facial recognition technology applications.
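To make the MA-COT idea concrete, the sketch below shows one way such a prompt could be assembled: the model is asked to describe each task-relevant attribute in turn before committing to a real/fake judgment. The attribute lists and template wording here are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of an MA-COT-style prompt builder. The attribute
# names below are plausible examples of task-relevant cues; the paper
# may use a different attribute set and phrasing.

TASK_ATTRIBUTES = {
    "face anti-spoofing": [
        "screen moire patterns",
        "paper or print texture",
        "specular reflections",
        "depth consistency across modalities",
    ],
    "face forgery detection": [
        "blending boundaries around the face",
        "eye and teeth artifacts",
        "lighting consistency",
        "skin texture realism",
    ],
}

def build_ma_cot_prompt(task: str) -> str:
    """Compose a prompt that walks the MLLM through task-relevant
    attributes one by one, then asks for a final authenticity verdict."""
    attributes = TASK_ATTRIBUTES[task]
    steps = "\n".join(
        f"{i}. Describe the {attr} in the image."
        for i, attr in enumerate(attributes, start=1)
    )
    return (
        f"Task: {task}.\n"
        f"Analyze the face image attribute by attribute:\n{steps}\n"
        "Finally, based on the descriptions above, "
        "answer: is the face real or fake?"
    )

prompt = build_ma_cot_prompt("face anti-spoofing")
print(prompt)
```

The resulting text would be sent to an MLLM alongside the face image; decomposing the judgment into per-attribute descriptions is what gives the paradigm its interpretability.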