🤖 AI Summary
This study addresses the challenge of achieving efficient, accurate state estimation in nonlinear dynamic systems when the system dynamics and noise models are unknown. To this end, it presents the first systematic comparison of model-free deep learning approaches (Transformers, state-space models (SSMs), and recurrent neural networks) against classical filtering methods such as particle filters and extended/unscented Kalman filters. Experiments show that state-space neural networks, without any explicit system model, approach the estimation accuracy of strong nonlinear Kalman filters and significantly outperform weaker baselines. These neural architectures also achieve substantially higher inference throughput, offering a compelling balance between accuracy and computational efficiency.
📝 Abstract
Neural network models are increasingly used for state estimation in control and decision-making problems, yet it remains unclear to what extent they behave as principled filters in nonlinear dynamical systems. Unlike classical filters, which rely on explicit knowledge of system dynamics and noise models, neural estimators can be trained purely from data, without access to the underlying system equations. In this work, we present a systematic empirical comparison between such model-free neural network models and classical filtering methods across multiple nonlinear scenarios. Our study evaluates Transformer-based models, state-space neural networks, and recurrent architectures alongside particle filters and nonlinear Kalman filters. The results show that neural models, in particular state-space models (SSMs), achieve state estimation accuracy that approaches strong nonlinear Kalman filters and outperforms weaker classical baselines despite lacking access to system models, while also attaining substantially higher inference throughput.
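To make the model-based/model-free contrast concrete: a classical filter such as the extended Kalman filter (EKF) needs the dynamics function, the measurement function, and their Jacobians, whereas the neural estimators in the study are trained only on measurement/state data. A minimal one-dimensional EKF sketch is shown below; the toy dynamics x' = 0.9·sin(x), the linear measurement y = x + v, and all noise values are illustrative assumptions, not the paper's benchmark systems:

```python
import math
import random

def ekf_step(x, P, y, q=0.01, r=0.09):
    """One EKF predict/update for a scalar nonlinear system.

    Requires explicit model knowledge: dynamics f(x) = 0.9*sin(x),
    its derivative (Jacobian) F = 0.9*cos(x), process noise q,
    measurement y = x + v with noise variance r (so H = 1).
    """
    # Predict: propagate mean through f and variance through the Jacobian.
    F = 0.9 * math.cos(x)
    x_pred = 0.9 * math.sin(x)
    P_pred = F * P * F + q
    # Update: scalar Kalman gain blends prediction and measurement.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (y - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Simulate the (assumed) system and run the filter on noisy measurements.
random.seed(0)
x_true, x_est, P = 0.5, 0.0, 1.0
for _ in range(50):
    x_true = 0.9 * math.sin(x_true) + random.gauss(0.0, 0.1)
    y = x_true + random.gauss(0.0, 0.3)
    x_est, P = ekf_step(x_est, P, y)
```

A model-free neural estimator replaces all of the hand-specified quantities above (f, its Jacobian, q, r) with a sequence model trained end-to-end on pairs of measurement histories and ground-truth states, which is precisely the setting the study evaluates.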