Eye Gaze-Informed and Context-Aware Pedestrian Trajectory Prediction in Shared Spaces with Automated Shuttles: A Virtual Reality Study

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
In shared spaces that lack explicit traffic rules, automated shuttles struggle to predict pedestrian behavior accurately, compromising both safety and operational efficiency. To address this challenge, the study develops a virtual reality experimental platform to collect fine-grained eye-tracking data, trajectory information, and contextual cues during pedestrian–vehicle interactions. The authors propose GazeX-LSTM, a novel model that demonstrates, for the first time, the irreplaceable contribution of gaze data to trajectory prediction and uncovers its complementary relationship with contextual factors, yielding super-additive performance gains. Experimental results show that GazeX-LSTM significantly outperforms baselines that rely solely on head orientation or trajectory history, establishing that integrating gaze behavior with contextual awareness substantially improves prediction accuracy.
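
This page does not reproduce the paper's evaluation protocol, but trajectory-prediction models are conventionally compared with Average Displacement Error (ADE) and Final Displacement Error (FDE). The sketch below shows those standard metrics under the assumption of 2D (x, y) trajectories; the function names, array shapes, and dummy data are illustrative choices, not the paper's code or results.

```python
# Standard trajectory-prediction metrics; a hedged sketch, not the paper's
# evaluation code. Assumes predictions and ground truth are (T, 2) arrays
# of x/y positions over T future timesteps.
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean distance between
    predicted and ground-truth positions over all timesteps."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fde(pred, gt):
    """Final Displacement Error: Euclidean distance at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

# Dummy 12-step example: a straight-line ground truth plus small noise.
gt = np.cumsum(np.full((12, 2), 0.1), axis=0)
pred = gt + np.random.normal(scale=0.05, size=gt.shape)
print(f"ADE={ade(pred, gt):.3f} m, FDE={fde(pred, gt):.3f} m")
```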

📝 Abstract
The integration of Automated Shuttles into shared urban spaces presents unique challenges due to the absence of explicit traffic rules and the complexity of pedestrian interactions. Accurately anticipating pedestrian behavior in such unstructured environments is therefore critical for ensuring both safety and efficiency. This paper presents a Virtual Reality (VR) study that captures how pedestrians interact with automated shuttles across diverse scenarios, including varying vehicle approach angles and navigation through continuous traffic. We identify critical behavioral patterns in pedestrians' decision-making in shared spaces, including hesitation, evasive maneuvers, gaze allocation, and proxemic adjustments. To model pedestrian behavior, we propose GazeX-LSTM, a multimodal, eye gaze-informed, and context-aware prediction model that integrates pedestrians' trajectories, fine-grained eye-gaze dynamics, and contextual factors. We shift prediction from a vehicle-centered to a human-centered perspective by leveraging eye-tracking data to capture pedestrian attention. We systematically validate the unique and irreplaceable predictive power of eye gaze over head orientation alone, and further enhance performance by integrating contextual variables. Notably, the combination of eye-gaze data and contextual information produces super-additive improvements in pedestrian behavior prediction accuracy, revealing the complementary relationship between visual attention and situational contexts. Together, our findings provide the first evidence that eye gaze-informed modeling fundamentally advances pedestrian behavior prediction, and they highlight the critical role of situational contexts in shared-space interactions. This paves the way for safer and more adaptive automated vehicle technologies that account for how people perceive and act in complex shared spaces.
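
The page does not give GazeX-LSTM's architecture. As a minimal sketch of the multimodal fusion the abstract describes (trajectory history, eye-gaze dynamics, and contextual factors), the PyTorch code below encodes each modality separately and fuses them by concatenation before regressing future positions. All layer sizes, input dimensionalities, the per-timestep gaze representation, and the late-fusion design are assumptions, not the authors' implementation.

```python
# A hypothetical GazeX-LSTM-style fusion model; illustrative only.
import torch
import torch.nn as nn

class GazeXLSTMSketch(nn.Module):
    """Encodes past trajectory, eye-gaze dynamics, and static contextual
    cues, then decodes a fixed-horizon future trajectory."""

    def __init__(self, traj_dim=2, gaze_dim=3, ctx_dim=4,
                 hidden=64, pred_horizon=12):
        super().__init__()
        # One recurrent encoder per time-varying modality (assumed: 2D
        # positions and, e.g., a 3D gaze-direction vector per timestep).
        self.traj_enc = nn.LSTM(traj_dim, hidden, batch_first=True)
        self.gaze_enc = nn.LSTM(gaze_dim, hidden, batch_first=True)
        # Contextual factors (e.g., scenario variables such as approach
        # angle) are assumed static per interaction.
        self.ctx_enc = nn.Linear(ctx_dim, hidden)
        # Late fusion by concatenation, then an MLP head that regresses
        # the whole prediction horizon at once.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pred_horizon * traj_dim),
        )
        self.pred_horizon, self.traj_dim = pred_horizon, traj_dim

    def forward(self, traj, gaze, ctx):
        # traj: (B, T, 2), gaze: (B, T, 3), ctx: (B, 4)
        _, (h_traj, _) = self.traj_enc(traj)
        _, (h_gaze, _) = self.gaze_enc(gaze)
        fused = torch.cat([h_traj[-1], h_gaze[-1], self.ctx_enc(ctx)], dim=-1)
        return self.head(fused).view(-1, self.pred_horizon, self.traj_dim)

model = GazeXLSTMSketch()
out = model(torch.randn(8, 20, 2), torch.randn(8, 20, 3), torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 12, 2])
```

A late-fusion layout like this also makes the ablations the abstract implies straightforward: dropping the gaze branch or the context branch isolates each modality's contribution, which is how a super-additive combined gain would be detected.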
Problem

Research questions and friction points this paper is trying to address.

pedestrian trajectory prediction
shared spaces
automated shuttles
eye gaze
context-aware
Innovation

Methods, ideas, or system contributions that make the work stand out.

eye gaze
context-aware
pedestrian trajectory prediction
virtual reality
GazeX-LSTM