🤖 AI Summary
To address inconsistent annotation of driver hazardous actions (DHAs), high manual labeling costs, and data imbalance, all of which undermine causal attribution in traffic accident narratives, this paper proposes a framework that integrates a fine-tuned large language model (LLM) with counterfactual probabilistic reasoning. The authors adopt Llama 3.2 1B as the backbone and benchmark it against classical baselines, including Random Forest, XGBoost, CatBoost, and a neural network, trained and validated on a five-year corpus of two-vehicle crash narratives. The fine-tuned LLM achieves 80% classification accuracy, outperforming all baselines. Crucially, the counterfactual analysis quantitatively uncovers previously unreported causal enhancement effects; for instance, distracted driving and teenage drivers amplify the probability of specific DHAs by up to 2.3×. The framework thus delivers both high-accuracy DHA classification and interpretable, causally grounded inference, offering a new approach to traffic safety attribution analysis.
📝 Abstract
Vehicle crashes involve complex interactions between road users, split-second decisions, and challenging environmental conditions. Among these, two-vehicle crashes are the most prevalent, accounting for approximately 70% of roadway crashes and posing a significant challenge to traffic safety. Identifying Driver Hazardous Actions (DHAs) is essential for understanding crash causation, yet the reliability of DHA data in large-scale databases is limited by inconsistent and labor-intensive manual coding practices. Here, we present a framework that leverages a fine-tuned large language model to automatically infer DHAs from textual crash narratives, improving both the validity and the interpretability of DHA classifications. Using five years of two-vehicle crash data from MTCF, we fine-tuned the Llama 3.2 1B model on detailed crash narratives and benchmarked its performance against conventional machine learning classifiers, including Random Forest, XGBoost, CatBoost, and a neural network. The fine-tuned LLM achieved an overall accuracy of 80%, surpassing all baseline models, with pronounced improvements on imbalanced classes. To increase interpretability, we developed a probabilistic reasoning approach that analyzes shifts in model outputs between the original test set and three targeted counterfactual scenarios varying driver distraction and age. Our analysis revealed that introducing distraction for one driver substantially increased the likelihood of "General Unsafe Driving"; distraction for both drivers maximized the probability of "Both Drivers Took Hazardous Actions"; and assigning a teen driver markedly elevated the probability of "Speed and Stopping Violations." Our framework and analytical methods provide a robust and interpretable solution for large-scale automated DHA detection, offering new opportunities for traffic safety analysis and intervention.
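The counterfactual analysis described above can be sketched as comparing class probabilities before and after editing a narrative. The code below is a minimal illustration, not the paper's implementation: `predict_dha_probs` is a hypothetical stand-in for the fine-tuned LLM's per-class probability output (here a toy keyword-based stub), and the class names mirror the DHA labels mentioned in the abstract.

```python
# Hypothetical sketch of counterfactual probability-shift analysis.
# predict_dha_probs is a toy stub standing in for the fine-tuned LLM;
# the real model would score each DHA class from the crash narrative.

DHA_CLASSES = [
    "General Unsafe Driving",
    "Both Drivers Took Hazardous Actions",
    "Speed and Stopping Violations",
]

def predict_dha_probs(narrative: str) -> dict:
    """Return a toy probability for each DHA class (stub classifier)."""
    scores = {c: 1.0 for c in DHA_CLASSES}  # uniform prior
    if "distracted" in narrative:
        scores["General Unsafe Driving"] += 2.0       # toy causal bump
    if narrative.count("distracted") >= 2:            # both drivers distracted
        scores["Both Drivers Took Hazardous Actions"] += 3.0
    if "teen" in narrative:
        scores["Speed and Stopping Violations"] += 2.0
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def probability_shift(original: str, counterfactual: str) -> dict:
    """Ratio of counterfactual to factual probability for each DHA class."""
    p0 = predict_dha_probs(original)
    p1 = predict_dha_probs(counterfactual)
    return {c: p1[c] / p0[c] for c in DHA_CLASSES}

base = "Driver 1 rear-ended Driver 2 at an intersection."
cf = "Driver 1, distracted by a phone, rear-ended Driver 2 at an intersection."
shifts = probability_shift(base, cf)
print(shifts["General Unsafe Driving"])  # ratio > 1: distraction amplifies this DHA
```

A shift ratio above 1 for a class indicates that the counterfactual edit (e.g., adding distraction or a teen driver) increases that class's probability, which is the kind of causal enhancement effect the framework quantifies.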