🤖 AI Summary
This study addresses the challenge of predicting driver yielding behavior at unsignalized intersections to enhance pedestrian safety. To overcome difficulties in fine-grained modeling of interactive behaviors and in achieving interpretable decision-making, we propose a novel multimodal prompting mechanism that integrates traffic-domain knowledge, structured reasoning chains, and few-shot prompting, thereby enhancing large language models' (LLMs') understanding of dynamic traffic contexts. Leveraging GPT-4o and Deepseek-V3, our framework fuses vision-language inputs with domain-constrained prompts to enable context-aware behavioral inference. Experimental results show that GPT-4o achieves the highest accuracy (89.2%) and recall (85.7%), while Deepseek-V3 attains the best precision (91.4%), highlighting a trade-off between predictive performance and computational efficiency. Our key contribution is the first knowledge-augmented, interpretable LLM-based prediction framework specifically designed for human-vehicle interaction at unsignalized intersections.
📝 Abstract
Pedestrian safety is a critical component of urban mobility and is strongly influenced by the interactions between pedestrian decision-making and driver yielding behavior at crosswalks. Modeling driver--pedestrian interactions at intersections requires accurately capturing the complexity of these behaviors. Traditional machine learning models often struggle to capture the nuanced, context-dependent reasoning these multifactorial interactions require, owing to their reliance on fixed feature representations and their limited interpretability. In contrast, large language models (LLMs) are well suited to extracting patterns from heterogeneous traffic data, enabling accurate modeling of driver--pedestrian interactions. This paper therefore leverages multimodal LLMs through a novel prompt design that incorporates domain-specific knowledge, structured reasoning, and few-shot prompting, enabling interpretable and context-aware inference of driver yielding behavior as an example application of modeling pedestrian--driver interaction. We benchmarked state-of-the-art LLMs against traditional classifiers, finding that GPT-4o consistently achieves the highest accuracy and recall, while Deepseek-V3 excels in precision. These findings highlight the critical trade-offs between model performance and computational efficiency, offering practical guidance for deploying LLMs in real-world pedestrian safety systems.
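The prompt design the abstract describes (domain knowledge plus structured reasoning plus few-shot examples) can be sketched as a simple prompt-assembly function. This is a hypothetical illustration, not the authors' actual prompt template: the knowledge statement, reasoning steps, example scenes, and field names below are all assumptions for demonstration.

```python
# Hypothetical sketch of a knowledge-augmented, few-shot prompt for driver
# yielding prediction. All text content here is illustrative, not the
# paper's actual template.

DOMAIN_KNOWLEDGE = (
    "At unsignalized crosswalks, drivers are more likely to yield when "
    "vehicle speed is low, the pedestrian has entered the crosswalk, "
    "and time-to-arrival is large."
)

REASONING_STEPS = [
    "1. Estimate the vehicle's speed and distance to the crosswalk.",
    "2. Assess the pedestrian's position and apparent intent to cross.",
    "3. Combine both observations with the domain knowledge above.",
    "4. Answer 'yield' or 'not yield' with a one-sentence justification.",
]

FEW_SHOT_EXAMPLES = [
    ("Vehicle at 15 km/h, 20 m away; pedestrian stepping off the curb.",
     "yield"),
    ("Vehicle at 50 km/h, 10 m away; pedestrian waiting on the sidewalk.",
     "not yield"),
]

def build_prompt(scene_description: str) -> str:
    """Assemble one knowledge-augmented, few-shot prompt for a scene."""
    examples = "\n".join(
        f"Scene: {scene}\nAnswer: {label}"
        for scene, label in FEW_SHOT_EXAMPLES
    )
    steps = "\n".join(REASONING_STEPS)
    return (
        f"Domain knowledge: {DOMAIN_KNOWLEDGE}\n\n"
        f"Reason step by step:\n{steps}\n\n"
        f"Examples:\n{examples}\n\n"
        f"Scene: {scene_description}\nAnswer:"
    )

prompt = build_prompt(
    "Vehicle at 20 km/h, 25 m away; pedestrian at the curb edge."
)
```

In the multimodal setting described in the paper, a prompt like this would accompany the visual input (e.g., an intersection image or frame sequence) sent to GPT-4o or Deepseek-V3; the text portion constrains the model's reasoning to the traffic domain.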