DiffEye: Diffusion-Based Continuous Eye-Tracking Data Generation Conditioned on Natural Images

📅 2025-09-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing eye movement generation models predominantly rely on discrete scanpaths, neglecting the richness of raw continuous trajectories and inter-subject variability, and failing to capture the inherent stochasticity and diversity of natural visual attention. This work introduces the first diffusion-based framework for generating continuous eye movement trajectories conditioned on natural images. We propose a Corresponding Positional Embedding (CPE) to explicitly align visual semantic features with spatial gaze distributions and model individual differences. Our method enables end-to-end generation of continuous trajectories, discrete scanpaths, and saliency maps. Despite limited training data, it achieves state-of-the-art performance on discrete scanpath prediction and, uniquely, enables high-fidelity continuous trajectory synthesis. The generated trajectories more accurately reflect the spatiotemporal dynamics of human visual attention and inter-individual heterogeneity.

📝 Abstract
Numerous models have been developed for scanpath and saliency prediction. These models are typically trained on scanpaths, which represent eye movement as a sequence of discrete fixation points connected by saccades, so the rich information contained in the raw trajectories is often discarded. Moreover, most existing approaches fail to capture the variability observed among human subjects viewing the same image. They generally predict a single scanpath of fixed, pre-defined length, which conflicts with the inherent diversity and stochastic nature of real-world visual attention. To address these challenges, we propose DiffEye, a diffusion-based training framework designed to model continuous and diverse eye movement trajectories during free viewing of natural images. Our method builds on a diffusion model conditioned on visual stimuli and introduces a novel component, namely Corresponding Positional Embedding (CPE), which aligns spatial gaze information with the patch-based semantic features of the visual input. By leveraging raw eye-tracking trajectories rather than relying on scanpaths, DiffEye captures the inherent variability in human gaze behavior and generates high-quality, realistic eye movement patterns, despite being trained on a comparatively small dataset. The generated trajectories can also be converted into scanpaths and saliency maps, resulting in outputs that more accurately reflect the distribution of human visual attention. DiffEye is the first method to tackle this task on natural images using a diffusion model while fully leveraging the richness of raw eye-tracking data. Our extensive evaluation shows that DiffEye not only achieves state-of-the-art performance in scanpath generation but also enables, for the first time, the generation of continuous eye movement trajectories. Project webpage: https://diff-eye.github.io/
Problem

Research questions and friction points this paper is trying to address.

Modeling continuous eye movement trajectories from natural images
Capturing variability in human gaze behavior across subjects
Generating realistic eye movements using raw tracking data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion model for continuous eye movement generation
Corresponding Positional Embedding aligns gaze with semantics
Uses raw eye-tracking trajectories instead of scanpaths
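The bullets above describe CPE only at a high level. One plausible reading, sketched here purely as an assumption, is that each continuous gaze coordinate is assigned the positional embedding of the image patch it falls inside, so gaze tokens and patch-based visual tokens share one spatial coordinate frame. The function name, the 14×14 patch grid, and the lookup scheme are all hypothetical, not taken from the paper.

```python
import torch

def corresponding_positional_embedding(gaze_xy, patch_pos_emb, grid_size=14):
    """Hypothetical CPE sketch: for each normalized gaze point in [0, 1]^2,
    look up the positional embedding of the ViT patch containing it, so the
    gaze token is expressed in the same spatial frame as the visual tokens.

    gaze_xy:       (..., 2) normalized (x, y) gaze coordinates
    patch_pos_emb: (grid_size * grid_size, d) patch positional embeddings
    """
    col = (gaze_xy[..., 0] * grid_size).clamp(0, grid_size - 1).long()
    row = (gaze_xy[..., 1] * grid_size).clamp(0, grid_size - 1).long()
    idx = row * grid_size + col          # flatten (row, col) into a patch index
    return patch_pos_emb[idx]            # (..., d) embedding per gaze sample
```

The actual DiffEye formulation may interpolate between neighboring patch embeddings or learn the mapping; this only illustrates the gaze-to-patch alignment idea named in the summary.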