FaceCoT: A Benchmark Dataset for Face Anti-Spoofing with Chain-of-Thought Reasoning

📅 2025-06-02
🤖 AI Summary
Face anti-spoofing (FAS) suffers from poor cross-domain generalization of unimodal models and a severe scarcity of high-quality vision-language data. To address these challenges, we introduce FaceCoT—the first large-scale vision-language VQA benchmark tailored for FAS—covering 14 distinct presentation attack types and featuring novel chain-of-thought (CoT) question-answering annotations. We propose CoT-enhanced progressive learning (CEPL), a training strategy that integrates multi-modal large language models (MLLMs), VQA, CoT reasoning, and progressive supervision. Additionally, we design a reinforcement learning–driven vision-language generation model to synthesize high-fidelity, CoT-annotated samples. Evaluated on multiple mainstream FAS benchmarks, our approach achieves significant improvements over state-of-the-art methods. Results demonstrate that CoT-annotated data and CEPL jointly enhance generalization across devices and attack types, while improving model interpretability and robustness.

📝 Abstract
Face Anti-Spoofing (FAS) typically depends on a single visual modality when defending against presentation attacks such as print attacks, screen replays, and 3D masks, resulting in limited generalization across devices, environments, and attack types. Meanwhile, Multimodal Large Language Models (MLLMs) have recently achieved breakthroughs in image-text understanding and semantic reasoning, suggesting that integrating visual and linguistic co-inference into FAS can substantially improve both robustness and interpretability. However, the lack of a high-quality vision-language multimodal dataset has been a critical bottleneck. To address this, we introduce FaceCoT (Face Chain-of-Thought), the first large-scale Visual Question Answering (VQA) dataset tailored for FAS. FaceCoT covers 14 spoofing attack types and enriches model learning with high-quality CoT VQA annotations. Meanwhile, we develop a caption model refined via reinforcement learning to expand the dataset and enhance annotation quality. Furthermore, we introduce a CoT-Enhanced Progressive Learning (CEPL) strategy to better leverage the CoT data and boost model performance on FAS tasks. Extensive experiments demonstrate that models trained with FaceCoT and CEPL outperform state-of-the-art methods on multiple benchmark datasets.
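The CoT VQA annotations described in the abstract pair each face image with a question, a step-by-step reasoning chain, and a final live/spoof verdict. A minimal sketch of what one such record might look like — the field names and values here are illustrative assumptions, not the dataset's actual schema:

```python
import json

# Hypothetical FaceCoT-style record; field names are illustrative,
# not the official FaceCoT schema.
record = {
    "image": "samples/print_attack_0042.jpg",
    "attack_type": "print",  # one of the 14 presentation attack types
    "question": "Is the face in this image live or a presentation attack?",
    "chain_of_thought": [
        "The facial region shows uniform paper-like texture with no skin gloss.",
        "Moire patterns and halftone dots are visible under magnification.",
        "The edge of a printed sheet is partially visible near the jawline.",
    ],
    "answer": "spoof",
}

# Records like this serialize cleanly for VQA-style training pipelines.
serialized = json.dumps(record, indent=2)
parsed = json.loads(serialized)
print(parsed["answer"])  # prints "spoof"
```

Structuring the rationale as an ordered list of observations, rather than one free-form paragraph, makes it straightforward to supervise intermediate reasoning steps separately from the final answer.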
Problem

Research questions and friction points this paper is trying to address.

Limited generalization in Face Anti-Spoofing due to reliance on a single visual modality
Lack of a high-quality vision-language dataset for FAS
Need for robust, interpretable FAS with multimodal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces FaceCoT, the first large-scale CoT VQA dataset for FAS
Refines a caption model via reinforcement learning to expand the dataset and improve annotation quality
Develops CoT-Enhanced Progressive Learning (CEPL) to better exploit CoT data
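CoT-Enhanced Progressive Learning, as summarized above, progressively shifts supervision while training on CoT data. One plausible reading is a curriculum that first emphasizes the CoT rationale objective and gradually shifts weight toward the final live/spoof answer objective; the schedule below is a sketch under that assumption, not the paper's exact recipe:

```python
# Illustrative CEPL-style curriculum: early epochs weight the CoT
# rationale loss, later epochs shift weight to the final answer loss.
# The linear two-objective schedule is an assumption for illustration only.

def cepl_loss_weights(epoch: int, total_epochs: int) -> dict:
    """Return per-objective loss weights for a given training epoch."""
    progress = epoch / max(total_epochs - 1, 1)  # 0.0 -> 1.0 over training
    return {
        "cot": 1.0 - progress,   # CoT reasoning supervision fades out
        "answer": progress,      # final live/spoof supervision ramps up
    }

schedule = [cepl_loss_weights(e, 10) for e in range(10)]
print(schedule[0])   # {'cot': 1.0, 'answer': 0.0}
print(schedule[-1])  # {'cot': 0.0, 'answer': 1.0}
```

In an actual training loop, the total loss at each epoch would be the weighted sum of the two objectives under these weights, so the model is first taught to reason and then to decide.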
Authors
Honglu Zhang (Didi Chuxing)
Zhiqin Fang (Beijing University of Posts and Telecommunications)
Ningning Zhao (Didi Chuxing)
Saihui Hou (Beijing Normal University)
Long Ma (Dalian University of Technology)
Renwang Pei (Didi Chuxing)
Zhaofeng He (Beijing University of Posts and Telecommunications)