Human vs. AI Safety Perception? Decoding Human Safety Perception with Eye-Tracking Systems, Street View Images, and Explainable AI

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior studies predominantly evaluate perceived urban safety using large-scale street-view imagery, overlooking the critical visual factors that drive human attention and safety perception. Method: This study integrates eye-tracking data with explainable AI (XAI) attention methods such as XGradCAM and EigenCAM, using the Mean Object Ratio in Highlighted Regions (MoRH) and Mean Object Hue (MoH) metrics to systematically compare human gaze patterns against AI-generated attention maps in safety perception tasks. Contribution/Results: We find that urban infrastructure and public space features strongly attract visual attention and positively influence perceived safety, whereas sky regions contribute minimally. XGradCAM and EigenCAM produce attention heatmaps most consistent with human gaze distributions. Critically, this work introduces a cognitive-informed safety assessment framework grounded in empirical human attention evidence. By bridging human visual cognition and XAI, it establishes a paradigm for interpretable urban visual analytics, enabling more human-aligned and transparent safety evaluations.
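The core comparison the summary describes, human gaze patterns versus AI-generated attention maps, can be sketched as a similarity score between two heatmaps on the same grid. The sketch below is illustrative only: the function name, the min-max normalization, and the use of Pearson correlation are assumptions, not the paper's actual metric.

```python
import numpy as np

def compare_attention_maps(human_gaze, cam_map):
    """Similarity between a human gaze heatmap and an XAI attention map.

    Both inputs are 2-D arrays on the same grid. Returns the Pearson
    correlation of the flattened, min-max-normalized maps. Hypothetical
    measure for illustration; the paper's exact alignment metric may differ.
    """
    def normalize(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    h = normalize(human_gaze).ravel()
    c = normalize(cam_map).ravel()
    return float(np.corrcoef(h, c)[0, 1])

# Toy example: two maps that both emphasize the same lower-left region.
rng = np.random.default_rng(0)
gaze = rng.random((8, 8)); gaze[4:, :4] += 2.0   # simulated fixation density
cam = rng.random((8, 8)); cam[4:, :4] += 2.0     # simulated XGradCAM output
print(compare_attention_maps(gaze, cam))
```

A score near 1 indicates the XAI method highlights the same regions humans fixate on, which is how a finding like "XGradCAM aligns most closely with human gaze" could be quantified.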

📝 Abstract
The way residents perceive safety plays an important role in how they use public spaces. Studies have combined large-scale street view images with advanced computer vision techniques to measure the perceived safety of urban environments. Despite their success, such studies have often overlooked the specific environmental visual factors that draw human attention and trigger people's feelings of safety. In this study, we introduce a computational framework that enriches the existing literature on place perception by combining eye-tracking systems with street view images and deep learning approaches. Eye-tracking systems quantify not only what users are looking at but also how long they engage with specific environmental elements, allowing us to explore which visual environmental factors influence human safety perception. We conducted our research in Helsingborg, Sweden, where we recruited volunteers outfitted with eye-tracking systems and asked them to indicate which of two street view images appeared safer. By examining participants' focus on specific features using the Mean Object Ratio in Highlighted Regions (MoRH) and Mean Object Hue (MoH) metrics, we identified key visual elements that attract human attention when people perceive an environment as safe. For instance, certain urban infrastructure and public space features draw more human attention, while the sky is less relevant to safety perceptions. These insights offer a more human-centered understanding of which urban features influence human safety perceptions. Furthermore, we compared real human attention from eye-tracking systems with attention maps obtained from eXplainable Artificial Intelligence (XAI) methods. Of the several XAI models tested, XGradCAM and EigenCAM aligned most closely with human safety perceptual patterns.
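The abstract's MoRH metric relates highlighted (high-attention) regions to object classes in the scene. A minimal sketch of one plausible reading, assuming a semantic segmentation map and a thresholded attention map; the function name, threshold, and exact formulation are hypothetical, not taken from the paper:

```python
import numpy as np

def object_ratio_in_highlighted(seg_mask, attention, object_id, threshold=0.5):
    """Fraction of high-attention pixels that fall on a given object class.

    seg_mask: integer semantic-segmentation map (one class label per pixel).
    attention: attention/gaze heatmap in [0, 1] on the same grid.
    Hypothetical reading of the MoRH idea: threshold the attention map,
    then measure how much of the highlighted region overlaps the object.
    """
    highlighted = attention >= threshold
    if highlighted.sum() == 0:
        return 0.0
    return float(((seg_mask == object_id) & highlighted).sum() / highlighted.sum())

# Toy scene: class 1 ("building") fills the left half, class 2 ("sky") the right.
seg = np.array([[1, 1, 2, 2]] * 4)
att = np.zeros((4, 4)); att[:, :2] = 0.9   # attention concentrated on the left
print(object_ratio_in_highlighted(seg, att, object_id=1))  # → 1.0
```

Averaging this ratio per object class over many images would then show which features (infrastructure, public space, sky) dominate attention, matching the kind of conclusion the abstract reports.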
Problem

Research questions and friction points this paper is trying to address.

Identifying visual environmental factors influencing human safety perception
Comparing human attention patterns with AI-generated attention maps
Quantifying human gaze behavior using eye-tracking and street view images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eye-tracking systems quantify human visual attention duration
Deep learning analyzes street view images for safety perception
Explainable AI models align with human perceptual patterns
Yuhao Kang
Assistant Professor, GISense Lab, The University of Texas at Austin & GISphere
GIScience · Geospatial Data Science · GeoAI · Cartography · Urban Visual Intelligence
Junda Chen
Department of Computer Science and Engineering, University of California San Diego, San Diego, CA, United States
Liu Liu
Senseable City Lab, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA, United States
Kshitij Sharma
Senseable City Lab, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA, United States; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Martina Mazzarello
Senseable City Lab, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA, United States
Simone Mora
Senseable City Lab, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA, United States; Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
Fabio Duarte
Senseable City Lab, Department of Urban Studies and Planning, Massachusetts Institute of Technology, Cambridge, MA, United States
Carlo Ratti
Professor, Senseable City Lab, Department of Urban Studies and Planning, MIT
Urban Studies · Cities · Urban Mobility · Urban Computing · Urban Design