State Space Model Programming in Turing.jl

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing state-space model (SSM) frameworks lack composability and scalability, hindering rapid model experimentation and integration of advanced inference techniques. Method: We introduce the first modular SSM programming framework built on the Julia ecosystem, decoupling model structure from inference algorithms to enable unified modeling and inference across linear Gaussian, nonlinear, and non-Gaussian systems. The framework integrates Turing.jl, SSMProblems.jl, and GeneralisedFilters.jl; introduces a novel dynamic scheduling mechanism for hybrid filtering (e.g., Kalman + particle filters); and incorporates CUDA-accelerated GPU execution with memory optimization. Contribution/Results: Experiments demonstrate substantially reduced development complexity—enabling real-time inference on million-scale time-series data—while improving code reuse by over 3× and achieving an 8.2× throughput gain for GPU-based inference over CPU counterparts.

Technology Category

Application Category

📝 Abstract
State space models (SSMs) are a powerful and widely-used class of probabilistic models for analysing time-series data across various fields, from econometrics to robotics. Despite their prevalence, existing software frameworks for SSMs often lack compositionality and scalability, hindering experimentation and making it difficult to leverage advanced inference techniques. This paper introduces SSMProblems.jl and GeneralisedFilters.jl, two Julia packages within the Turing.jl ecosystem, that address this challenge by providing a consistent, composable, and general framework for defining SSMs and performing inference on them. This unified interface allows researchers to easily define a wide range of SSMs and apply various inference algorithms, including Kalman filtering, particle filtering, and combinations thereof. By promoting code reuse and modularity, our packages reduce development time and improve the reliability of SSM implementations. We prioritise scalability through efficient memory management and GPU-acceleration, ensuring that our framework can handle large-scale inference tasks.
Problem

Research questions and friction points this paper is trying to address.

Lack of composable, scalable frameworks for state space models
Difficulty in applying advanced inference techniques to SSMs
Need for efficient, modular SSM implementations in time-series analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Composable framework for state space models
Unified interface for multiple inference algorithms
Scalable with GPU-acceleration and memory management
🔎 Similar Papers
2024-05-27Neural Information Processing SystemsCitations: 24
2024-04-12International Conference on Machine LearningCitations: 35
T
Tim Hargreaves
Department of Engineering, University of Cambridge, UK
Q
Qing Li
Department of Engineering, University of Cambridge, UK
C
Charles Knipp
Federal Reserve Board of Governors, USA
F
Frédéric Wantiez
S
Simon J. Godsill
Department of Engineering, University of Cambridge, UK
Hong Ge
Hong Ge
Cambridge University
Bayesian InferenceMonte CarloMachine LearningArtificial Intelligence