Text-Audio-Visual-conditioned Diffusion Model for Video Saliency Prediction

📅 2025-04-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the underutilization of multimodal information in video saliency prediction by proposing, for the first time, a conditional diffusion-based generation framework that jointly leverages the visual, auditory, and textual modalities. Methodologically, the authors design the Saliency-DiT architecture to decouple timestep embeddings from multimodal conditioning, and introduce a Saliency-oriented Image-Text Response (SITR) mechanism for text-driven cross-modal guidance, with audio serving as a further condition for sound-source-driven saliency. Experiments demonstrate consistent improvements over state-of-the-art unimodal and bimodal approaches: +1.03% in SIM, +2.35% in CC, +2.71% in NSS, and +0.33% in AUC-J. These results validate the effectiveness of trimodal diffusion modeling for spatiotemporal saliency map generation.

📝 Abstract
Video saliency prediction is crucial for downstream applications such as video compression and human-computer interaction. With the flourishing of multimodal learning, researchers have begun to explore multimodal video saliency prediction, including audio-visual and text-visual approaches. Auditory cues guide viewers' gaze toward sound sources, while textual cues provide semantic guidance for understanding video content. Integrating these complementary cues can improve the accuracy of saliency prediction. We therefore analyze the visual, auditory, and textual modalities jointly in this paper, and propose TAVDiff, a Text-Audio-Visual-conditioned Diffusion Model for video saliency prediction. TAVDiff treats video saliency prediction as an image generation task conditioned on textual, audio, and visual inputs, and predicts saliency maps through stepwise denoising. To utilize text effectively, a large multimodal model generates textual descriptions for video frames, and a saliency-oriented image-text response (SITR) mechanism produces image-text response maps; these serve as conditional information that guides the model to localize the visual regions semantically related to the textual description. The auditory modality serves as additional conditional information, directing the model to focus on salient regions indicated by sounds. Moreover, because the diffusion transformer (DiT) directly concatenates the conditional information with the timestep, it may distort the estimation of the noise level. To achieve effective conditional guidance, we propose Saliency-DiT, which decouples the conditional information from the timestep. Experimental results show that TAVDiff outperforms existing methods, improving the SIM, CC, NSS, and AUC-J metrics by 1.03%, 2.35%, 2.71%, and 0.33%, respectively.
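The paper does not give implementation details for the SITR response maps, but a common way to realize an image-text response of this kind is a cosine-similarity map between CLIP-style per-patch image features and a text embedding. The sketch below is illustrative only; the function name, feature shapes, and normalization are assumptions, not the authors' exact mechanism.

```python
import numpy as np

def image_text_response(patch_feats: np.ndarray, text_feat: np.ndarray) -> np.ndarray:
    """Cosine-similarity response map between per-patch image features
    (H, W, D) and a single text embedding (D,). Shapes and the min-max
    normalization are illustrative; the paper's SITR may differ."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    resp = p @ t  # (H, W) cosine similarities in [-1, 1]
    # Min-max normalize to [0, 1] so the map can act as spatial guidance.
    return (resp - resp.min()) / (resp.max() - resp.min() + 1e-8)

rng = np.random.default_rng(0)
feats = rng.standard_normal((14, 14, 512))   # hypothetical 14x14 patch grid
text = rng.standard_normal(512)              # hypothetical text embedding
resp_map = image_text_response(feats, text)
print(resp_map.shape)  # (14, 14)
```

Such a map can then be fed to the denoiser alongside the noisy saliency map, so regions semantically matching the caption receive stronger conditioning.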
Problem

Research questions and friction points this paper is trying to address.

Predict video saliency using text-audio-visual multimodal inputs
Enhance saliency prediction accuracy via complementary auditory and textual cues
Decouple conditional information from timestep for effective noise estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Text-Audio-Visual-conditioned Diffusion Model
Saliency-oriented image-text response mechanism
Decoupled conditional guidance in Saliency-DiT
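One plausible reading of "decoupled conditional guidance" is that the timestep influences the block only through adaLN-style scale/shift modulation, while the text/audio/visual conditions enter only through cross-attention, rather than being concatenated with the timestep embedding. The toy block below sketches that separation under stated assumptions; all weights, shapes, and the tanh-based modulation are hypothetical, not the paper's Saliency-DiT.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decoupled_block(tokens, t_emb, cond_tokens, Wq, Wk, Wv):
    """Toy DiT-style block: the timestep enters only via adaLN-like
    scale/shift, while condition tokens enter only via cross-attention.
    All mappings here are illustrative placeholders."""
    d = tokens.shape[-1]
    # (1) timestep -> scale & shift; no concatenation with conditions
    scale, shift = np.tanh(t_emb[:d]), np.tanh(t_emb[d:2 * d])
    h = tokens * (1 + scale) + shift
    # (2) cross-attention: noisy-map tokens query the condition tokens
    q, k, v = h @ Wq, cond_tokens @ Wk, cond_tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return h + attn @ v

rng = np.random.default_rng(1)
D = 16
tokens = rng.standard_normal((8, D))       # 8 noisy saliency-map tokens
t_emb = rng.standard_normal(2 * D)         # timestep embedding -> scale/shift
cond = rng.standard_normal((5, D))         # fused text/audio/visual tokens
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
out = decoupled_block(tokens, t_emb, cond, Wq, Wk, Wv)
print(out.shape)  # (8, 16)
```

Keeping the two pathways separate means the attention over conditions cannot perturb the network's estimate of the current noise level, which is the failure mode the abstract attributes to naive concatenation.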
Li Yu
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China, and also with the Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing University of Information Science and Technology, Nanjing 210044, China
Xuanzhe Sun
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
Wei Zhou
School of Computer Science and Informatics, Cardiff University, CF24 4AG Cardiff, U.K.
Moncef Gabbouj
Professor, Tampere University
Machine learning · Artificial intelligence · Signal processing · Image processing · Video communication