🤖 AI Summary
This study investigates differences in radiologists’ eye movement behaviors when viewing real versus AI-generated medical images, aiming to uncover shifts in visual attention allocation and diagnostic strategies. Method: High-precision eye-tracking technology was employed to quantify fixation patterns (first/last/short/long fixations), saccade trajectories (direction and amplitude), and visual saliency maps, followed by construction of a joint distribution statistical model. Contribution/Results: The study provides the first systematic evidence that radiologists exhibit significant gaze deviations—such as delayed first fixations and dispersed last fixations—and atypical saccade patterns when interpreting AI-generated images, indicating measurable alterations in diagnostic cognition. These findings reveal a previously unrecognized cognitive vulnerability undermining clinical trust in AI-generated medical imagery. Moreover, they establish an empirical foundation and methodological framework for a novel human-perception–grounded evaluation paradigm for AI medical imaging.
📝 Abstract
Eye-tracking analysis plays a vital role in medical imaging, providing key insights into how radiologists visually interpret and diagnose clinical cases. In this work, we first analyze radiologists' attention and agreement by measuring the distribution of several eye-movement patterns, including saccade direction, saccade amplitude, and their joint distribution. These metrics help uncover patterns in attention allocation and diagnostic strategy. Furthermore, we investigate whether and how radiologists' gaze behavior shifts when viewing authentic (Real) versus deep-learning-generated (Fake) images. To this end, we examine fixation bias maps, analyzing first, last, shortest, and longest fixations independently, along with detailed saccade patterns, to quantify differences in gaze distribution and visual saliency between authentic and synthetic images.
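The saccade metrics described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: it assumes fixations are given as an ordered list of hypothetical (x, y) screen coordinates, derives one saccade vector per consecutive fixation pair, and bins amplitude and direction into a joint distribution.

```python
import numpy as np

def saccade_stats(fixations):
    """From an ordered array of (x, y) fixation centers, compute the
    amplitude (Euclidean length) and direction (degrees, -180..180)
    of each saccade between consecutive fixations."""
    fixations = np.asarray(fixations, dtype=float)
    deltas = np.diff(fixations, axis=0)  # one displacement vector per saccade
    amplitudes = np.hypot(deltas[:, 0], deltas[:, 1])
    directions = np.degrees(np.arctan2(deltas[:, 1], deltas[:, 0]))
    return amplitudes, directions

def joint_distribution(amplitudes, directions, amp_bins=8, dir_bins=12):
    """Joint histogram of saccade amplitude and direction, normalized
    to a probability distribution (bin counts are illustrative choices)."""
    hist, _, _ = np.histogram2d(
        amplitudes, directions,
        bins=[amp_bins, dir_bins],
        range=[[0.0, float(amplitudes.max()) or 1.0], [-180.0, 180.0]],
    )
    return hist / hist.sum()
```

Comparing such joint distributions between the Real and Fake conditions (e.g. via a divergence measure) is one plausible way to quantify the gaze shifts the abstract refers to.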