HyperKKL: Learning KKL Observers for Non-Autonomous Nonlinear Systems via Hypernetwork-Based Input Conditioning

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of existing learning-based Kazantzis–Kravaris/Luenberger (KKL) observers: they are designed for autonomous systems and perform poorly when estimating the state of non-autonomous nonlinear systems driven by external inputs. To overcome this, the paper introduces hypernetworks into neural KKL observer design and proposes two input-conditioning strategies: first, incorporating input-dependent correction terms into the latent observer dynamics; second, employing a hypernetwork to generate input-modulated encoder and decoder weights, yielding time-varying transformation maps. The approach integrates neural differential equations with a left-inverse mapping for state reconstruction and provides a theoretical worst-case bound on the state estimation error. Experiments on four nonlinear benchmark systems show that the proposed method reduces the average SMAPE by 29% under non-zero inputs compared to static, autonomous mappings.
📝 Abstract
Kazantzis-Kravaris/Luenberger (KKL) observers are a class of state observers for nonlinear systems that rely on an injective map to transform the nonlinear dynamics into a stable quasi-linear latent space, from where the state estimate is obtained in the original coordinates via a left inverse of the transformation map. Current learning-based methods for these maps are designed exclusively for autonomous systems and do not generalize well to controlled or non-autonomous systems. In this paper, we propose two learning-based designs of neural KKL observers for non-autonomous systems whose dynamics are influenced by exogenous inputs. To this end, a hypernetwork-based framework ($HyperKKL$) is proposed with two input-conditioning strategies. First, an augmented observer approach ($HyperKKL_{obs}$) adds input-dependent corrections to the latent observer dynamics while retaining static transformation maps. Second, a dynamic observer approach ($HyperKKL_{dyn}$) employs a hypernetwork to generate encoder and decoder weights that are input-dependent, yielding time-varying transformation maps. We derive a theoretical worst-case bound on the state estimation error. Numerical evaluations on four nonlinear benchmark systems show that input conditioning yields consistent improvements in estimation accuracy over static autonomous maps, with an average symmetric mean absolute percentage error (SMAPE) reduction of 29% across all non-zero input regimes.
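The two input-conditioning strategies described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: all dimensions, layer sizes, and names (`latent_step_obs`, `hyper_decoder`, `W_corr`, `H1`, `H2`) are assumptions, and the weights are random rather than trained.

```python
# Illustrative sketch of HyperKKL_obs (input-dependent latent correction) and
# HyperKKL_dyn (hypernetwork-generated decoder weights). Shapes and names are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_z, n_y, n_u, n_x = 6, 1, 1, 2     # latent, output, input, and state dimensions

# Stable latent dynamics z' = A z + B y with A (approximately) Hurwitz,
# as in a standard KKL observer.
A = -np.eye(n_z) + 0.1 * rng.standard_normal((n_z, n_z))
B = rng.standard_normal((n_z, n_y))

def latent_step_obs(z, y, u, W_corr, dt=0.01):
    """HyperKKL_obs: add an input-dependent correction phi(u) to the latent
    dynamics while keeping the transformation maps static."""
    phi_u = np.tanh(W_corr @ np.atleast_1d(u))           # learned correction term
    return z + dt * (A @ z + B @ np.atleast_1d(y) + phi_u)

def hyper_decoder(u, H1, H2):
    """HyperKKL_dyn: a hypernetwork maps the exogenous input u to decoder
    weights, yielding an input-modulated (time-varying) map z -> x_hat."""
    w = np.tanh(H1 @ np.atleast_1d(u))                   # hypernetwork hidden layer
    theta = H2 @ w                                       # flat decoder parameters
    W = theta[: n_x * n_z].reshape(n_x, n_z)             # unpack decoder weight
    b = theta[n_x * n_z :]                               # and bias
    return lambda z: W @ z + b                           # decoder for this input

# Random stand-ins for parameters that would be trained jointly in practice.
W_corr = rng.standard_normal((n_z, n_u))
H1 = rng.standard_normal((16, n_u))
H2 = rng.standard_normal((n_x * n_z + n_x, 16))

z = np.zeros(n_z)
for t in range(100):                                     # simulate the latent observer
    y, u = np.sin(0.01 * t), 0.5                         # dummy measurement and input
    z = latent_step_obs(z, y, u, W_corr)

x_hat = hyper_decoder(u, H1, H2)(z)                      # input-conditioned estimate
```

In the paper, both maps are realized with neural networks trained so that the decoder is a left inverse of the encoder; here linear stand-ins are used purely to show where the input `u` enters each design.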
Problem

Research questions and friction points this paper is trying to address.

non-autonomous nonlinear systems, KKL observers, state estimation, exogenous inputs, input conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hypernetwork, KKL observer, non-autonomous systems, input conditioning, neural state estimation
Yahia Salaheldin Shaaban
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE
Abdelrahman Sayed Sayed
PhD student @ Université Gustave Eiffel - ESTAS
AI, Autonomous Systems, Control Theory & Applications, Formal Methods, Marine Robotics
M. Umar B. Niazi
KTH Royal Institute of Technology
Control theory, Networked systems, Game theory
Karl Henrik Johansson
Department of Decision and Control Systems and Digital Futures, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden