One Filters All: A Generalist Filter for State Estimation

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Optimal filtering—estimating the latent states of dynamic systems from noisy observations—is a classical challenge in control theory and signal processing. This paper introduces LLM-Filter, the first framework to repurpose frozen large language models (LLMs) as universal filters. It achieves cross-system generalization via text-based prototypical embedding of noisy measurements, a modality alignment mechanism, and a “System-as-Prompt” (SaP) design. Crucially, LLM-Filter abandons conventional parametric system modeling, instead representing dynamic behavior through semantic text embeddings—enabling robust state estimation in unseen environments. Experiments demonstrate that LLM-Filter outperforms existing learned filters on multiple canonical nonlinear systems. Moreover, its performance scales favorably with LLM size and training duration, validating the feasibility and promise of LLMs as foundational models for filtering.

📝 Abstract
Estimating hidden states in dynamical systems, also known as optimal filtering, is a long-standing problem in various fields of science and engineering. In this paper, we introduce a general filtering framework, LLM-Filter, which leverages large language models (LLMs) for state estimation by embedding noisy observations with text prototypes. In various experiments on classical dynamical systems, we find, first, that state estimation can significantly benefit from the reasoning knowledge embedded in pre-trained LLMs. By achieving proper modality alignment with the frozen LLM, LLM-Filter outperforms state-of-the-art learning-based approaches. Second, we carefully design the prompt structure, System-as-Prompt (SaP), incorporating task instructions that enable the LLM to understand the estimation tasks. Guided by these prompts, LLM-Filter exhibits exceptional generalization, performing filtering tasks accurately in changed or even unseen environments. We further observe scaling-law behavior in LLM-Filter, where accuracy improves with larger model sizes and longer training times. These findings make LLM-Filter a promising foundation model for filtering.
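To make the filtering problem concrete: the classical baseline the paper builds on estimates a hidden state from noisy observations using a known parametric model. The sketch below is a minimal scalar Kalman filter, the textbook solution for the linear-Gaussian case; it illustrates the task LLM-Filter addresses, not the paper's LLM-based method, and all model parameters (a, c, q, r) are illustrative assumptions.

```python
import numpy as np

def kalman_filter(ys, a=1.0, c=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the linear-Gaussian model:
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (hidden state)
        y_t = c * x_t + v_t,      v_t ~ N(0, r)   (noisy observation)
    Returns the filtered state estimates, one per observation.
    """
    x, p = x0, p0
    estimates = []
    for y in ys:
        # Predict: propagate the state estimate and its variance
        x_pred = a * x
        p_pred = a * p * a + q
        # Update: blend the prediction with the new measurement
        k = p_pred * c / (c * p_pred * c + r)  # Kalman gain
        x = x_pred + k * (y - c * x_pred)
        p = (1 - k * c) * p_pred
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
true_x = np.cumsum(rng.normal(0.0, 0.1, 100))   # hidden random-walk state
ys = true_x + rng.normal(0.0, 0.5, 100)         # noisy observations
est = np.array(kalman_filter(ys))
```

With the model parameters matched to the simulation, the filtered estimates track the hidden state more closely than the raw observations; learned filters (and LLM-Filter) aim to achieve this without hand-specified system models.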
Problem

Research questions and friction points this paper is trying to address.

Developing a general filtering framework using LLMs for state estimation
Leveraging pre-trained LLMs' reasoning knowledge for dynamical systems
Creating a scalable filter that generalizes to unseen environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging LLMs for state estimation via text prototypes
Designing System-as-Prompt structure for task understanding
Achieving modality alignment with frozen LLM parameters
Authors

Shiqi Liu, School of Vehicle and Mobility, Tsinghua University
Wenhan Cao, Tsinghua University (State Estimation, Robot Learning)
Chang Liu, College of Engineering, Peking University
Zeyu He, Ph.D. Student, Penn State University (Natural Language Processing, HCI, Crowdsourcing)
Tianyi Zhang, School of Vehicle and Mobility, Tsinghua University
Shengbo Eben Li, School of Vehicle and Mobility, Tsinghua University; College of AI, Tsinghua University