🤖 AI Summary
This work addresses the challenge of detecting AI-generated images and identifying their source generative models by proposing a multimodal multitask learning framework. The approach integrates textual features extracted via BERT with visual features obtained from the CLIP image encoder, employing a cross-modal fusion architecture and a tailored multitask loss function. To enhance model generalization, a pseudo-label-based data augmentation strategy expands the training dataset with high-confidence samples. Evaluated in the "CT2: AI-Generated Image Detection" competition, the method achieved fifth place in both Task A (AI-generated image detection) and Task B (source-model attribution), attaining F1 scores of 83.16% and 48.88%, respectively, demonstrating its effectiveness.
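The pseudo-label-based augmentation mentioned above can be sketched as a confidence-filtering step: run the current model on unlabeled images, keep only predictions above a threshold, and add those samples to the training set with their predicted labels. The threshold value and filtering rule below are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of high-confidence pseudo-label selection.
# The 0.9 threshold is an assumption for illustration.
import torch

def select_pseudo_labels(logits, threshold=0.9):
    """Keep unlabeled samples whose top predicted class probability
    exceeds `threshold`; return their indices and hard labels."""
    probs = torch.softmax(logits, dim=-1)
    conf, labels = probs.max(dim=-1)
    keep = conf >= threshold
    return keep.nonzero(as_tuple=True)[0], labels[keep]

# Two unlabeled samples: one confident, one uncertain.
logits = torch.tensor([[4.0, 0.0],   # p(class 0) ~= 0.98 -> kept
                       [0.2, 0.1]])  # p(class 0) ~= 0.52 -> dropped
idx, lbl = select_pseudo_labels(logits)
```

The retained `(idx, lbl)` pairs would then be merged into the labeled training pool for the next training round.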
📝 Abstract
With the aim of detecting AI-generated images and identifying the specific models responsible for their generation, we propose a multi-modal multi-task model. The model leverages pre-trained BERT and CLIP vision encoders for text and image feature extraction, respectively, and employs cross-modal feature fusion with a tailored multi-task loss function. Additionally, a pseudo-labeling-based data augmentation strategy expands the training dataset with high-confidence samples. The model achieved fifth place in both Task A and Task B of the `CT2: AI-Generated Image Detection' competition, with F1 scores of 83.16\% and 48.88\%, respectively. These findings highlight the effectiveness of the proposed architecture and its potential for advancing AI-generated content detection in real-world scenarios. The source code for our method is available at https://github.com/xxxxxxxxy/AIGeneratedImageDetection.
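The cross-modal fusion with a multi-task loss described above can be sketched in PyTorch. The sketch below stands in for the full pipeline: the BERT and CLIP encoders are replaced by precomputed feature tensors, and the fusion width, head sizes, and loss weighting are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of a cross-modal fusion head with a multi-task loss.
# Encoder dims (768), hidden width, n_models, and lam are assumptions.
import torch
import torch.nn as nn

class MultiTaskFusionHead(nn.Module):
    """Fuses text and image features, then predicts:
       Task A: real vs. AI-generated (2 classes)
       Task B: source generative model (n_models classes)."""
    def __init__(self, text_dim=768, image_dim=768, hidden=512, n_models=5):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),  # concat-then-project fusion
            nn.ReLU(),
        )
        self.head_a = nn.Linear(hidden, 2)          # detection head
        self.head_b = nn.Linear(hidden, n_models)   # attribution head

    def forward(self, text_feat, image_feat):
        z = self.fuse(torch.cat([text_feat, image_feat], dim=-1))
        return self.head_a(z), self.head_b(z)

def multitask_loss(logits_a, logits_b, y_a, y_b, lam=1.0):
    # Weighted sum of the two cross-entropy terms; lam is a guess.
    ce = nn.CrossEntropyLoss()
    return ce(logits_a, y_a) + lam * ce(logits_b, y_b)

# Stand-ins for BERT [CLS] and CLIP image embeddings (batch of 4).
text_feat = torch.randn(4, 768)
image_feat = torch.randn(4, 768)
model = MultiTaskFusionHead()
logits_a, logits_b = model(text_feat, image_feat)
loss = multitask_loss(logits_a, logits_b,
                      torch.randint(0, 2, (4,)),
                      torch.randint(0, 5, (4,)))
```

In the actual system, `text_feat` and `image_feat` would come from the pre-trained BERT and CLIP vision encoders, and `loss` would be backpropagated through both heads jointly.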