🤖 AI Summary
This work addresses the underutilization of multimodal information in video saliency prediction by proposing, for the first time, a conditional diffusion-based generation framework that jointly leverages visual, auditory, and textual modalities. Methodologically, the authors design the Saliency-DiT architecture to decouple timestep embeddings from multimodal conditioning, and introduce a saliency-oriented image-text response (SITR) guidance mechanism that localizes visual regions semantically related to textual descriptions, while auditory conditioning directs attention toward sound sources. Experiments demonstrate consistent improvements over state-of-the-art unimodal and bimodal approaches: +1.03% in SIM, +2.35% in CC, +2.71% in NSS, and +0.33% in AUC-J. These results validate the effectiveness of trimodal diffusion modeling for spatiotemporal saliency map generation.
📝 Abstract
Video saliency prediction is crucial for downstream applications such as video compression and human-computer interaction. With the flourishing of multimodal learning, researchers have begun to explore multimodal video saliency prediction, including audio-visual and text-visual approaches. Auditory cues guide viewers' gaze toward sound sources, while textual cues provide semantic guidance for understanding video content. Integrating these complementary cues can improve the accuracy of saliency prediction. Therefore, in this paper we jointly analyze visual, auditory, and textual modalities and propose TAVDiff, a Text-Audio-Visual-conditioned Diffusion Model for video saliency prediction. TAVDiff treats video saliency prediction as an image generation task conditioned on textual, auditory, and visual inputs, and predicts saliency maps through stepwise denoising. To exploit text effectively, a large multimodal model generates textual descriptions for video frames, and a saliency-oriented image-text response (SITR) mechanism produces image-text response maps. These maps serve as conditional information that guides the model to localize visual regions semantically related to the textual description. The auditory modality serves as further conditional information, directing the model to focus on salient regions indicated by sound. Meanwhile, the standard diffusion transformer (DiT) directly concatenates conditional information with the timestep embedding, which may interfere with the estimation of the noise level. To achieve effective conditional guidance, we therefore propose Saliency-DiT, which decouples the conditional information from the timestep. Experimental results show that TAVDiff outperforms existing methods, improving SIM, CC, NSS, and AUC-J by 1.03%, 2.35%, 2.71%, and 0.33%, respectively.
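The abstract does not specify how the SITR response maps are computed; the sketch below shows one plausible realisation, assuming CLIP-style patch embeddings for a frame and a single embedding for its generated textual description. The function name, arguments, and the min-max rescaling are illustrative assumptions, not the paper's actual mechanism.

```python
import torch
import torch.nn.functional as F

def image_text_response_map(patch_feats, text_feat, grid_hw, out_hw):
    """Hypothetical image-text response map in the spirit of SITR.

    patch_feats: (N, D) visual patch embeddings from a frame encoder
    text_feat:   (D,)   embedding of the frame's textual description
    grid_hw:     (Hp, Wp) patch grid, with N == Hp * Wp
    out_hw:      (H, W) spatial size expected by the diffusion model
    """
    # Cosine similarity between every patch and the text embedding.
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    response = patch_feats @ text_feat                      # (N,)

    # Reshape to the patch grid and rescale to [0, 1] so the map can
    # act as spatial conditioning information.
    hp, wp = grid_hw
    response = response.view(1, 1, hp, wp)
    response = (response - response.min()) / (response.max() - response.min() + 1e-6)

    # Upsample to the resolution of the saliency map being denoised.
    return F.interpolate(response, size=out_hw, mode="bilinear", align_corners=False)
```

Regions whose patches align with the description receive high response values, which is how such a map could steer the denoiser toward text-relevant areas.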
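Likewise, the decoupling of conditional information from the timestep in Saliency-DiT is only stated at a high level. The block below assumes one common way to realise such a separation: the timestep embedding drives adaLN-style modulation only, while the text, audio, and SITR conditions enter through a separate cross-attention path. All class and parameter names here are hypothetical and do not reproduce the paper's implementation.

```python
import torch
import torch.nn as nn

class DecoupledDiTBlock(nn.Module):
    """Illustrative DiT-style block: timestep -> adaLN modulation;
    multimodal condition tokens -> cross-attention. An assumed sketch of
    the decoupling described in the abstract, not the actual Saliency-DiT."""

    def __init__(self, dim, num_heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        # The timestep embedding only produces scale/shift/gate parameters,
        # so it never mixes with the multimodal condition tokens.
        self.adaln = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x, t_emb, cond_tokens):
        # x: (B, N, dim) noisy saliency-map tokens
        # t_emb: (B, dim) timestep embedding
        # cond_tokens: (B, M, dim) text / audio / SITR condition tokens
        s1, b1, g1, s2, b2, g2 = self.adaln(t_emb).chunk(6, dim=-1)

        h = self.norm1(x) * (1 + s1.unsqueeze(1)) + b1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.self_attn(h, h, h, need_weights=False)[0]

        # Conditions are injected here, on a path independent of the timestep.
        h = self.norm2(x)
        x = x + self.cross_attn(h, cond_tokens, cond_tokens, need_weights=False)[0]

        h = self.norm3(x) * (1 + s2.unsqueeze(1)) + b2.unsqueeze(1)
        x = x + g2.unsqueeze(1) * self.mlp(h)
        return x
```

Keeping the timestep on its own modulation path leaves the noise-level signal untouched by the condition tokens, which is the motivation the abstract gives for decoupling the two.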