🤖 AI Summary
Infinite Hidden Markov Models (iHMMs) lack principled initialization strategies, and the commonly adopted uniform initialization is suboptimal. Method: We systematically evaluate finite-HMM–inspired initialization schemes in the infinite-state setting and propose a distance-based clustering initialization. Coupled with a hierarchical Dirichlet process prior and an enhanced beam sampler—integrating dynamic programming and slice sampling—our approach enables adaptive truncation and efficient Bayesian inference. Contribution/Results: Experiments on synthetic and real-world time-series data demonstrate that our initialization significantly improves both accuracy in detecting structural changes and convergence speed, outperforming uniform initialization and other baselines. This work provides the first empirically validated, initialization-optimization framework for practical iHMM deployment.
📝 Abstract
Infinite hidden Markov models provide a flexible framework for modelling time series with structural changes and complex dynamics, without requiring the number of latent states to be specified in advance. This flexibility is achieved through the hierarchical Dirichlet process prior, while efficient Bayesian inference is enabled by the beam sampler, which combines dynamic programming with slice sampling to truncate the infinite state space adaptively. Despite extensive methodological developments, the role of initialization in this framework has received limited attention. This study addresses this gap by systematically evaluating initialization strategies commonly used for finite hidden Markov models and assessing their suitability in the infinite setting. Results from both simulated and real datasets show that distance-based clustering initializations consistently outperform model-based and uniform alternatives, the latter being the most widely adopted in the existing literature.