From Sight to Insight: Unleashing Eye-Tracking in Weakly Supervised Video Salient Object Detection

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the scarcity of strong supervision in video salient object detection (VSOD) by proposing a weakly supervised approach that leverages eye-tracking signals. The method introduces three key components: (1) a Position and Semantic Embedding (PSE) module that jointly models spatial gaze distributions and semantic priors; (2) a Semantics and Locality Query (SLQ) Competitor that selects the object query best matching the salient object under semantic and locality constraints; and (3) an Intra-Inter Mixed Contrastive (IIMC) learning paradigm that enforces consistency of weak supervision at both intra-video and inter-video granularities. Evaluated on five mainstream VSOD benchmarks, the framework consistently outperforms existing methods across multiple metrics, demonstrating its effectiveness, robustness, and generalization capability under weak supervision.

📝 Abstract
The eye-tracking video saliency prediction (VSP) task and the video salient object detection (VSOD) task both focus on the most attractive objects in a video, presenting their results as predictive heatmaps and pixel-level saliency masks, respectively. In practical applications, eye-tracker annotations are more readily obtainable and align closely with the authentic visual patterns of human eyes. This paper therefore introduces fixation information to assist the detection of video salient objects under weak supervision. On the one hand, we consider how to better explore and utilize the information provided by fixations, and propose a Position and Semantic Embedding (PSE) module that provides location and semantic guidance during feature learning. On the other hand, we achieve spatiotemporal feature modeling under weak supervision from the aspects of feature selection and feature contrast. A Semantics and Locality Query (SLQ) Competitor with semantic and locality constraints is designed to select the object query that most accurately matches the salient object for spatiotemporal modeling. In addition, an Intra-Inter Mixed Contrastive (IIMC) model improves spatiotemporal modeling capabilities under weak supervision by forming an intra-video and inter-video contrastive learning paradigm. Experimental results on five popular VSOD benchmarks show that our model outperforms competing methods on various evaluation metrics.
Problem

Research questions and friction points this paper is trying to address.

Using eye-tracking data to improve weakly supervised video salient object detection
Exploring fixation information for better location and semantic guidance in VSOD
Enhancing spatiotemporal modeling under weak supervision with contrastive learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Position and Semantic Embedding module for guidance
Semantics and Locality Query Competitor design
Intra-Inter Mixed Contrastive learning paradigm
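The listing above describes the IIMC paradigm only at a high level. As an illustrative sketch (not the authors' implementation), an InfoNCE-style objective that treats temporally adjacent frames of the same video as positives (intra-video) and frames from other videos as negatives (inter-video) could look like the following; the function names `info_nce` and `intra_inter_contrastive` are hypothetical:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss for one anchor embedding.

    anchor, positive: 1-D feature vectors (L2-normalized internally).
    negatives: 2-D array, one negative embedding per row.
    tau: temperature controlling the sharpness of the similarity scores.
    """
    def norm(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

    a, p, n = norm(anchor), norm(positive), norm(negatives)
    pos = np.exp(np.dot(a, p) / tau)          # similarity to the positive
    neg = np.exp(n @ a / tau).sum()           # summed similarity to negatives
    return -np.log(pos / (pos + neg))

def intra_inter_contrastive(frame_feats, other_video_feats, tau=0.07):
    """Toy mixed contrastive objective over one video.

    frame_feats: (T, D) per-frame features of one video; adjacent frames
        act as intra-video positive pairs.
    other_video_feats: (M, D) features drawn from different videos,
        used as inter-video negatives.
    """
    losses = [
        info_nce(frame_feats[t], frame_feats[t + 1], other_video_feats, tau)
        for t in range(len(frame_feats) - 1)
    ]
    return float(np.mean(losses))
```

In this sketch, minimizing the loss pulls adjacent-frame representations of the same video together while pushing them away from other videos' frames, which is one common way to realize an intra-video/inter-video contrastive pairing under weak supervision.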
Qi Qin
Institute of Information Science, Beijing Jiaotong University, Beijing 100044, China and Beijing Key Laboratory of Advanced Information Science and Network Technology, Beijing 100044, China
Runmin Cong
School of Control Science and Engineering, Shandong University, Jinan 250061, China and Key Laboratory of Machine Intelligence and System Control, Ministry of Education, Jinan 250061, China
Gen Zhan
ByteDance China, Shenzhen 518000, China
Yiting Liao
Staff Research Scientist at Wireless Communications Research, Intel Labs
Video Processing, Video Communications, Video Understanding
Sam Kwong
Lingnan University, Hong Kong
Video Coding, Evolutionary Computation, Machine Learning and Pattern Recognition