AI Summary
This work addresses the lack of effective fusion mechanisms in multimodal audio-visual deepfake detection and localization by proposing a two-stage divide-and-conquer framework. In the first stage, forgery detection and fine-grained tampering localization are performed independently within each modality (audio and visual). The second stage integrates these modality-specific outputs through a data-driven cross-modal score fusion strategy. By combining precise intra-modal localization with discriminative inter-modal cues, the proposed approach improves system robustness and generalization. Evaluated on the DDL Challenge Track 2 test set, the method achieves an AUC of 0.87, an average precision (AP) of 0.55, an average recall (AR) of 0.23, and a composite score of 0.5528.
Abstract
This paper presents a system for detecting fake audio-visual content (i.e., video deepfakes), developed for Track 2 of the DDL Challenge. The proposed system employs a two-stage framework comprising unimodal detection and multimodal score fusion. Specifically, it incorporates an audio deepfake detection module and an audio localization module to identify and pinpoint manipulated segments in the audio stream. In parallel, an image-based deepfake detection and localization module processes the visual modality. To leverage complementary information across modalities, we further propose a multimodal score fusion strategy that integrates the outputs of the audio and visual modules. Guided by a detailed analysis of the training and evaluation datasets, we explore and evaluate several score calculation and fusion strategies to improve system robustness. The final fusion-based system achieves an AUC of 0.87, an AP of 0.55, and an AR of 0.23 on the challenge test set, yielding a final score of 0.5528.
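To make the fusion step concrete, the sketch below shows one simple instance of cross-modal score fusion: a convex weighted average of per-modality scores. This is an illustrative assumption only; the paper's actual data-driven strategy, its weights, and its score-calculation details are not specified in this abstract, and the function name and parameters here are hypothetical.

```python
def fuse_scores(audio_score: float, visual_score: float,
                w_audio: float = 0.5) -> float:
    """Convex weighted average of per-modality deepfake scores in [0, 1].

    Hypothetical example of a score-level fusion rule; the weight w_audio
    would be tuned on held-out data in a data-driven fusion strategy.
    """
    if not 0.0 <= w_audio <= 1.0:
        raise ValueError("w_audio must lie in [0, 1]")
    return w_audio * audio_score + (1.0 - w_audio) * visual_score


# A clip scored strongly fake by the audio branch but weakly by the
# visual branch; fusion tempers the audio-only decision.
fused = fuse_scores(audio_score=0.9, visual_score=0.3, w_audio=0.6)
print(round(fused, 2))  # 0.9*0.6 + 0.3*0.4 = 0.66
```

Score-level fusion of this kind keeps the two unimodal pipelines independent, so each branch can be trained and improved separately before the lightweight fusion stage combines their outputs.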