A Vision-and-Knowledge Enhanced Large Language Model for Generalizable Pedestrian Crossing Behavior Inference

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for inferring pedestrian crossing behavior exhibit limited generalization in unseen scenarios. This work proposes PedX-LLM, the first approach to inject visual context and traffic-domain knowledge into a large language model, leveraging LLaMA-2-7B fine-tuned with LoRA to enable multimodal semantic reasoning and shift the paradigm from data-driven fitting toward human-like logical inference. The model employs LLaVA to extract visual features and fuses them with textual and domain-specific knowledge, achieving a balanced accuracy of 82.0% on known scenes. On five unseen test locations, it attains a zero-shot accuracy of 66.9%, outperforming baseline methods by at least 18 percentage points; with just five examples for fine-tuning, performance further improves to 72.2%.
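The headline numbers (82.0%, 66.9%, 72.2%) are balanced accuracies, i.e. the mean of per-class recalls, which avoids rewarding a model that simply predicts the majority class (crossing vs. not-crossing events are rarely 50/50). A minimal sketch of the metric, using toy labels rather than the paper's data:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    recalls = []
    for c in set(y_true):
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        recalls.append(correct / support)
    return sum(recalls) / len(recalls)

# Toy imbalanced split: 8 "wait" events vs. 2 "cross" events.
y_true = ["wait"] * 8 + ["cross"] * 2
y_pred = ["wait"] * 7 + ["cross"] + ["cross", "wait"]

# Plain accuracy is 8/10 = 0.8, but balanced accuracy averages
# recall("wait") = 7/8 and recall("cross") = 1/2, giving 0.6875.
print(balanced_accuracy(y_true, y_pred))  # → 0.6875
```

Note how the majority class inflates plain accuracy while balanced accuracy exposes the weak minority-class recall, which is why cross-site comparisons in the paper are reported this way.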

📝 Abstract
Existing paradigms for inferring pedestrian crossing behavior, ranging from statistical models to supervised learning methods, demonstrate limited generalizability and perform inadequately on new sites. Recent advances in Large Language Models (LLMs) offer a shift from numerical pattern fitting to semantic, context-aware behavioral reasoning, yet existing LLM applications lack domain-specific adaptation and visual context. This study introduces Pedestrian Crossing LLM (PedX-LLM), a vision-and-knowledge enhanced framework designed to transform pedestrian crossing inference from site-specific pattern recognition to generalizable behavioral reasoning. By integrating LLaVA-extracted visual features with textual data and transportation domain knowledge, PedX-LLM fine-tunes a LLaMA-2-7B foundation model via Low-Rank Adaptation (LoRA) to infer crossing decisions. PedX-LLM achieves 82.0% balanced accuracy, outperforming the best statistical and supervised learning methods. Results demonstrate that the vision-augmented module contributes a 2.9% performance gain by capturing the built environment, and that integrating domain knowledge yields an additional 4.1% improvement. To evaluate generalizability across unseen environments, cross-site validation was conducted using site-based partitioning. The zero-shot PedX-LLM configuration achieves 66.9% balanced accuracy on five unseen test sites, outperforming the baseline data-driven methods by at least 18 percentage points. Incorporating just five validation examples into PedX-LLM via few-shot learning further raises the balanced accuracy to 72.2%. PedX-LLM demonstrates strong generalizability to unseen scenarios, confirming that vision-and-knowledge-enhanced reasoning enables the model to mimic human-like decision logic and overcome the limitations of purely data-driven methods.
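The LoRA step in the abstract can be sketched in a few lines: the pretrained weight matrix stays frozen, and only a low-rank update is trained. This is a generic illustration of Low-Rank Adaptation, not the paper's training code; the dimensions and variable names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight (stand-in for one projection matrix
# inside LLaMA-2-7B; real layers are far larger).
d_out, d_in, r, alpha = 16, 16, 4, 8   # rank r << d keeps the update cheap
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors. B starts at zero so training begins
# exactly at the pretrained model (standard LoRA initialization).
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradients;
    # W is never updated, which is what makes fine-tuning parameter-efficient.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.ones(d_in)
# With B = 0 the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

In practice this wiring is handled by an adapter library rather than written by hand; the point of the sketch is that the trainable parameter count scales with r * (d_in + d_out) instead of d_in * d_out.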
Problem

Research questions and friction points this paper is trying to address.

pedestrian crossing behavior
generalizability
vision-and-knowledge enhancement
large language models
cross-site validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Model
Vision-Language Integration
Domain Knowledge Enhancement
Generalizable Behavior Inference
Low-Rank Adaptation
Qingwen Pu
Transportation Informatics Lab, Department of Civil and Environmental Engineering, Old Dominion University, Norfolk, VA 23529, United States
Kun Xie
Associate Professor, Old Dominion University
Transportation Safety, Transportation Resilience, AI, Connected and Autonomous Vehicles
Hong Yang
Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, VA, United States
Guocong Zhai
National University of Singapore
Transportation Safety, Mobility Behavior, Causal Inference, Generative AI