🤖 AI Summary
This study addresses the heterogeneity and dynamic evolution of experience-based preferences in individual travel decision-making. We propose a Bayesian latent-class reinforcement learning framework, the first to jointly model population heterogeneity and feedback-driven adaptive learning. Estimated via variational Bayesian inference, the framework yields interpretable latent classes and robustly uncovers three distinct strategic patterns (context-dependent, persistent exploitation, and exploration-oriented) in driving simulator data. The approach improves both predictive accuracy and interpretability in individual-level behavioral modeling, and its systematic integration of latent-class modeling with reinforcement learning under an experiential learning paradigm establishes a new methodological foundation for analyzing the evolution of travel behavior.
📝 Abstract
Many travel decisions involve a degree of experience formation, where individuals learn their preferences over time. At the same time, there is extensive scope for heterogeneity across individual travellers, both in their underlying preferences and in how these evolve. The present paper puts forward a Latent Class Reinforcement Learning (LCRL) model that allows analysts to capture both of these phenomena. We apply the model to a driving simulator dataset and estimate the parameters through Variational Bayes. We identify three distinct classes of individuals that differ markedly in how they adapt their preferences: the first displays context-dependent preferences with context-specific exploitative tendencies; the second follows a persistent exploitative strategy regardless of context; and the third engages in an exploratory strategy combined with context-specific preferences.
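To make the latent-class reinforcement-learning idea concrete, the sketch below scores an observed route-choice sequence under a few hypothetical classes, each defined by a Q-learning rule with its own learning rate and softmax temperature, and computes posterior class membership by Bayes' rule. This is a minimal illustration under assumed parameter values, not the paper's model: the class labels, parameters, and data here are invented, and the paper estimates its LCRL model with Variational Bayes rather than by the direct enumeration shown.

```python
import numpy as np

def softmax(q, temp):
    """Choice probabilities from Q-values with exploration temperature temp."""
    z = q / temp
    z = z - z.max()          # numerical stability
    p = np.exp(z)
    return p / p.sum()

def sequence_loglik(choices, rewards, alpha, temp, n_alts=2):
    """Log-likelihood of one traveller's choice sequence under a single
    latent class's Q-learning rule (learning rate alpha, temperature temp)."""
    q = np.zeros(n_alts)
    ll = 0.0
    for c, r in zip(choices, rewards):
        ll += np.log(softmax(q, temp)[c])
        q[c] += alpha * (r - q[c])   # experience-based value update
    return ll

# Hypothetical class parameters, loosely echoing the three reported
# strategies: (learning rate alpha, softmax temperature temp).
classes = {
    "context-dependent":       (0.5, 0.5),
    "persistent exploitation": (0.1, 0.1),
    "exploration-oriented":    (0.3, 2.0),
}
priors = {k: 1.0 / len(classes) for k in classes}

# Toy observed data: alternative chosen and reward experienced each day.
choices = [0, 0, 1, 0, 0]
rewards = [1.0, 1.0, 0.2, 1.0, 1.0]

# Posterior class membership via Bayes' rule (log-sum-exp normalisation).
logpost = {k: np.log(priors[k]) + sequence_loglik(choices, rewards, a, t)
           for k, (a, t) in classes.items()}
m = max(logpost.values())
post = {k: np.exp(v - m) for k, v in logpost.items()}
total = sum(post.values())
post = {k: v / total for k, v in post.items()}
print(post)
```

In the full LCRL model this per-class likelihood is embedded in a mixture over the population, and Variational Bayes approximates the joint posterior over class memberships and class-specific learning parameters instead of evaluating a fixed grid of classes as above.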