🤖 AI Summary
Pedestrian hand gestures exhibit semantic ambiguity and strong contextual dependency in human-vehicle interaction, posing significant challenges for autonomous vehicles (AVs) in terms of recognition accuracy and interpretability. To address this, we employ GPT-4V, a multimodal vision-language model, not as a conventional evaluator but as a diagnostic tool, complemented by manual annotation and thematic analysis of videos from public datasets. Our systematic error analysis identifies four critical causal factors: gesture visibility, temporal dynamics of behavior, interaction intent, and environmental interference. Based on these findings, we propose design principles grounded in contextual redundancy and visual saliency to enhance gesture recognizability. This work advances an uncertainty-aware gesture understanding framework, offering an interpretable pathway to improving AV interaction robustness; the methodology further generalizes to augmented reality and assistive technologies.
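To make the diagnostic setup concrete, below is a minimal sketch of how one might query GPT-4V about a single video frame and elicit the free-text reasoning that feeds a thematic error analysis. It assumes the official OpenAI Python SDK and the `gpt-4-vision-preview` model identifier; the prompt wording, frame path, and function names are illustrative assumptions, not the paper's actual protocol.

```python
import base64
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_frame(path: str) -> str:
    """Base64-encode one extracted video frame for the vision API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


# Hypothetical prompt: ask for the interpretation AND the visual cues
# relied upon, so misrecognitions can be analysed thematically.
PROMPT = (
    "A pedestrian in this traffic scene is gesturing toward a vehicle. "
    "Describe the gesture, state the most likely communicative intent "
    "(e.g. 'yielding', 'requesting to cross', 'waving thanks'), rate your "
    "confidence from 0 to 1, and explain which visual cues you relied on."
)


def diagnose_frame(frame_path: str) -> str:
    """Query GPT-4V for a gesture interpretation plus its reasoning."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # GPT-4V; model name may change
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {
                    "url": f"data:image/jpeg;base64,{encode_frame(frame_path)}"
                }},
            ],
        }],
        max_tokens=400,
    )
    return response.choices[0].message.content


# The reasoning text, not just the predicted label, is the raw material
# for identifying factors such as visibility or environmental interference.
print(diagnose_frame("frames/example_crossing.jpg"))  # hypothetical file
```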
📄 Abstract
Pedestrian gestures play an important role in traffic communication, particularly in interactions with autonomous vehicles (AVs), yet their subtle, ambiguous, and context-dependent nature poses persistent challenges for machine interpretation. This study investigates these challenges by using GPT-4V, a vision-language model, not as a performance benchmark but as a diagnostic tool to reveal patterns and causes of gesture misrecognition. We analysed a public dataset of pedestrian-vehicle interactions, combining manual video review with thematic analysis of the model's qualitative reasoning. This dual approach surfaced recurring factors influencing misrecognition, including gesture visibility, pedestrian behaviour, interaction context, and environmental conditions. The findings suggest practical considerations for gesture design, including the value of salience and contextual redundancy, and highlight opportunities to improve AV recognition systems through richer context modelling and uncertainty-aware interpretations. While centred on AV-pedestrian interaction, the method and insights are applicable to other domains where machines interpret human gestures, such as wearable AR and assistive technologies.
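The "uncertainty-aware interpretation" the abstract calls for can be sketched as a simple decision policy: act only on high-confidence readings and degrade gracefully otherwise. The snippet below is purely illustrative; the `GestureReading` structure, intent labels, and thresholds are assumptions for the sketch, not values from the study.

```python
from dataclasses import dataclass


@dataclass
class GestureReading:
    """A parsed model output: predicted intent plus self-reported confidence."""
    intent: str        # e.g. "requesting_to_cross", "yielding", "unclear"
    confidence: float  # 0.0 - 1.0, as elicited from the model


def uncertainty_aware_response(reading: GestureReading,
                               act_threshold: float = 0.8,
                               caution_threshold: float = 0.5) -> str:
    """Map a gesture interpretation to a conservative vehicle behaviour.

    High confidence: act on the interpreted intent. Intermediate
    confidence: slow down and gather more context (further frames,
    gaze, road position). Low confidence: treat the gesture as
    unresolved and fall back to defensive defaults.
    """
    if reading.confidence >= act_threshold:
        return f"act_on:{reading.intent}"
    if reading.confidence >= caution_threshold:
        return "slow_and_reobserve"
    return "defensive_default"


# Example: an ambiguous wave is not acted on directly.
print(uncertainty_aware_response(GestureReading("requesting_to_cross", 0.43)))
# -> defensive_default
```

The point of the tiered policy is that misrecognition is treated as an expected outcome rather than an exception, which is what distinguishes uncertainty-aware interpretation from a hard classification.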