When Context Is Not Enough: Modeling Unexplained Variability in Car-Following Behavior

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional deterministic car-following models fail to capture structured stochasticity arising from latent factors—such as driver intent, perceptual errors, and memory effects—leading to unquantifiable residual uncertainty when relying solely on observable contextual variables (e.g., spacing, speed). To address this, we propose an interpretable stochastic modeling framework that integrates a scene-adaptive Gibbs-kernel Gaussian process with a deep neural network, explicitly capturing non-stationary temporal dependencies in acceleration decisions and enabling data-driven uncertainty quantification. Experiments on the HighD dataset demonstrate that our method significantly outperforms classical deterministic models and state-of-the-art stochastic approaches in both predictive accuracy and uncertainty calibration. This advancement enhances the fidelity and interpretability of microscopic traffic simulation for complex human driving behaviors.

📝 Abstract
Modeling car-following behavior is fundamental to microscopic traffic simulation, yet traditional deterministic models often fail to capture the full extent of variability and unpredictability in human driving. While many modern approaches incorporate context-aware inputs (e.g., spacing, speed, relative speed), they frequently overlook structured stochasticity that arises from latent driver intentions, perception errors, and memory effects -- factors that are not directly observable from context alone. To fill this gap, this study introduces an interpretable stochastic modeling framework that captures not only context-dependent dynamics but also residual variability beyond what context can explain. Leveraging deep neural networks integrated with non-stationary Gaussian processes (GPs), our model employs a scenario-adaptive Gibbs kernel to learn dynamic temporal correlations in acceleration decisions, whose strength and duration evolve with the driving context. This formulation enables a principled, data-driven quantification of uncertainty in acceleration, speed, and spacing, grounded in both observable context and latent behavioral variability. Comprehensive experiments on a naturalistic vehicle trajectory dataset collected from German highways (the HighD dataset) demonstrate that the proposed stochastic simulation method surpasses conventional methods in both predictive performance and interpretable uncertainty quantification. The integration of interpretability and accuracy makes this framework a promising tool for traffic analysis and safety-critical applications.
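To make the kernel idea concrete, here is a minimal sketch of the standard 1-D Gibbs kernel with an input-dependent lengthscale, used to draw correlated acceleration residuals. The lengthscale function below is a hypothetical stand-in (the paper learns it with a deep neural network from the driving context); the function names and constants are illustrative, not the authors' implementation.

```python
import numpy as np

def gibbs_kernel(t, lengthscale_fn, variance=1.0):
    """Non-stationary Gibbs kernel on a 1-D time grid:
    k(t, t') = sigma^2 * sqrt(2 l(t) l(t') / (l(t)^2 + l(t')^2))
               * exp(-(t - t')^2 / (l(t)^2 + l(t')^2))
    """
    l = lengthscale_fn(t)                      # per-timestep lengthscales l(t)
    l2 = l[:, None] ** 2 + l[None, :] ** 2     # l(t)^2 + l(t')^2
    pre = np.sqrt(2.0 * np.outer(l, l) / l2)   # normalization prefactor
    sq = (t[:, None] - t[None, :]) ** 2        # squared time differences
    return variance * pre * np.exp(-sq / l2)

# Hypothetical context-dependent lengthscale: a stand-in for the DNN output,
# letting correlation strength/duration vary along the trajectory.
def lengthscale_from_context(t):
    return 1.0 + 0.5 * np.sin(0.3 * t) ** 2

t = np.linspace(0.0, 10.0, 50)
K = gibbs_kernel(t, lengthscale_from_context)

# Sample correlated acceleration residuals around a deterministic model's output.
jitter = 1e-8 * np.eye(len(t))                 # numerical stability
residuals = np.random.default_rng(0).multivariate_normal(np.zeros(len(t)), K + jitter)
```

Because the lengthscale varies with its input, nearby acceleration decisions can stay strongly correlated in some driving scenes and decorrelate quickly in others, which is the non-stationarity the scenario-adaptive kernel is meant to express.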
Problem

Research questions and friction points this paper is trying to address.

Modeling unexplained variability in car-following behavior
Capturing latent driver intentions and perception errors
Improving predictive performance and uncertainty quantification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates deep neural networks with Gaussian processes
Uses scenario-adaptive Gibbs kernel for temporal correlations
Combines observable context with latent variability
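The uncertainty quantified in acceleration can be propagated to speed and spacing by integrating sampled acceleration trajectories forward in time. The sketch below uses simple Euler integration with an assumed constant-speed leader and Gaussian stand-in draws (the paper would use samples from the GP above); all names and parameters are illustrative.

```python
import numpy as np

def propagate(accel_samples, v0, s0, v_lead, dt=0.1):
    """Integrate sampled follower accelerations into speed and spacing.

    accel_samples: (n_samples, T) acceleration draws (m/s^2)
    v0, s0:        initial follower speed (m/s) and gap to the leader (m)
    v_lead:        (T,) leader speed profile (m/s)
    """
    n, T = accel_samples.shape
    v = np.empty((n, T))
    s = np.empty((n, T))
    v_prev = np.full(n, v0)
    s_prev = np.full(n, s0)
    for k in range(T):
        v_k = np.maximum(v_prev + accel_samples[:, k] * dt, 0.0)  # no reversing
        s_k = s_prev + (v_lead[k] - v_k) * dt                     # gap update
        v[:, k], s[:, k] = v_k, s_k
        v_prev, s_prev = v_k, s_k
    return v, s

rng = np.random.default_rng(1)
T = 100
acc = rng.normal(0.0, 0.3, size=(200, T))   # stand-in for GP acceleration draws
v_lead = np.full(T, 25.0)                   # assumed constant-speed leader
v, s = propagate(acc, v0=25.0, s0=30.0, v_lead=v_lead)

# Empirical uncertainty band on the final speed across sampled trajectories.
speed_band = np.percentile(v[:, -1], [5, 95])
```

Percentiles over the sampled trajectories give calibrated-style uncertainty bands on speed and spacing, the quantities the abstract says the framework quantifies.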
Chengyuan Zhang
Hunan University
Machine learning, deep learning, data mining, multimedia data processing
Zhengbing He
MIT
traffic flow theory, autonomous driving, urban mobility, machine learning
Cathy Wu
Laboratory for Information & Decision Systems (LIDS), Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Lijun Sun
Department of Civil Engineering, McGill University, 817 Sherbrooke Street West, Montreal, QC H3A 0C3, Canada