Understanding Pedestrian Gesture Misrecognition: Insights from Vision-Language Model Reasoning

πŸ“… 2025-08-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Pedestrian hand gestures exhibit semantic ambiguity and strong contextual dependency in human-vehicle interaction, posing significant challenges for autonomous vehicles (AVs) in terms of recognition accuracy and interpretability. To address this, we employ GPT-4V, a multimodal vision-language model, not as a conventional evaluator but as a diagnostic tool, complemented by manual annotation and thematic analysis of videos from public datasets. Our systematic error analysis identifies four critical causal factors: gesture visibility, temporal dynamics of behaviour, interaction intent, and environmental interference. Based on these findings, we propose design principles grounded in contextual redundancy and visual saliency to enhance gesture recognizability. This work advances an uncertainty-aware gesture-understanding framework, offering an interpretable pathway to more robust AV interaction; the methodology further generalizes to augmented reality and assistive technologies.
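The diagnostic pipeline described above ends in a thematic analysis that maps per-clip error annotations onto the four causal factors. A minimal sketch of that tallying step is shown below; the thematic code names and example clips are hypothetical, invented purely for illustration, and only the four factor labels come from the paper.

```python
from collections import Counter

# Hypothetical thematic codes assigned during manual video review,
# mapped onto the paper's four causal factors of misrecognition.
FACTORS = {
    "occluded_hand": "gesture visibility",
    "low_contrast": "gesture visibility",
    "brief_gesture": "temporal dynamics",
    "mid_motion": "temporal dynamics",
    "ambiguous_intent": "interaction intent",
    "glare": "environmental interference",
}

def tally_factors(clip_codes):
    """Count, per causal factor, how many clips exhibit it at least once."""
    counts = Counter()
    for codes in clip_codes:
        # A clip may carry several codes; count each factor once per clip.
        for factor in {FACTORS[c] for c in codes if c in FACTORS}:
            counts[factor] += 1
    return counts

# Three invented annotated clips for illustration.
clips = [
    ["occluded_hand", "glare"],
    ["brief_gesture"],
    ["low_contrast", "ambiguous_intent"],
]
print(tally_factors(clips))
# β†’ gesture visibility appears in 2 clips; the other factors in 1 each
```

Tallies of this kind are one plausible way the recurring factors could be surfaced from qualitative codes; the paper itself reports a thematic analysis, not this specific aggregation.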

πŸ“ Abstract
Pedestrian gestures play an important role in traffic communication, particularly in interactions with autonomous vehicles (AVs), yet their subtle, ambiguous, and context-dependent nature poses persistent challenges for machine interpretation. This study investigates these challenges by using GPT-4V, a vision-language model, not as a performance benchmark but as a diagnostic tool to reveal patterns and causes of gesture misrecognition. We analysed a public dataset of pedestrian-vehicle interactions, combining manual video review with thematic analysis of the model's qualitative reasoning. This dual approach surfaced recurring factors influencing misrecognition, including gesture visibility, pedestrian behaviour, interaction context, and environmental conditions. The findings suggest practical considerations for gesture design, including the value of salience and contextual redundancy, and highlight opportunities to improve AV recognition systems through richer context modelling and uncertainty-aware interpretations. While centred on AV-pedestrian interaction, the method and insights are applicable to other domains where machines interpret human gestures, such as wearable AR and assistive technologies.
Problem

Research questions and friction points this paper is trying to address.

Analyzing causes of pedestrian gesture misrecognition by autonomous vehicles
Investigating gesture visibility and context challenges using GPT-4V
Improving AV recognition through context modeling and gesture design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using GPT-4V as a diagnostic tool rather than a performance benchmark
Combining manual video review with thematic analysis of model reasoning
Improving AV recognition via richer context modelling and uncertainty-aware interpretation