LILAD: Learning In-context Lyapunov-stable Adaptive Dynamics Models

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In system identification, neural network models often struggle to ensure stability and online adaptability simultaneously. To address this, we propose a unified framework that jointly models dynamical systems and Lyapunov functions via in-context learning, incorporating a state-dependent attenuator to rigorously enforce the Lyapunov decrease condition. This ensures both stability and rapid adaptation under parameter uncertainty, out-of-distribution inputs, and non-stationary dynamics. Our approach is the first to deeply integrate stability constraints with context-driven dynamics modeling, avoiding the trade-offs and disjoint treatment inherent in prior methods. Evaluated on multiple autonomous-systems benchmarks, our method consistently outperforms non-adaptive, robust, and state-of-the-art adaptive baselines, achieving significant improvements in both prediction accuracy and cross-task/out-of-distribution stability.

📝 Abstract
System identification in control theory aims to approximate dynamical systems from trajectory data. While neural networks have demonstrated strong predictive accuracy, they often fail to preserve critical physical properties such as stability and typically assume stationary dynamics, limiting their applicability under distribution shifts. Existing approaches generally address either stability or adaptability in isolation, lacking a unified framework that ensures both. We propose LILAD (Learning In-Context Lyapunov-stable Adaptive Dynamics), a novel framework for system identification that jointly guarantees adaptability and stability. LILAD simultaneously learns a dynamics model and a Lyapunov function through in-context learning (ICL), explicitly accounting for parametric uncertainty. Trained across a diverse set of tasks, LILAD produces a stability-aware, adaptive dynamics model alongside an adaptive Lyapunov certificate. At test time, both components adapt to a new system instance using a short trajectory prompt, which enables fast generalization. To rigorously ensure stability, LILAD also computes a state-dependent attenuator that enforces a sufficient decrease condition on the Lyapunov function for any state in the new system instance. This mechanism extends stability guarantees even under out-of-distribution and out-of-task scenarios. We evaluate LILAD on benchmark autonomous systems and demonstrate that it outperforms adaptive, robust, and non-adaptive baselines in predictive accuracy.
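The attenuator mechanism described in the abstract can be sketched in spirit. The snippet below is a minimal illustration, not LILAD's actual method: it uses a toy quadratic Lyapunov candidate `V` and a linear one-step model `f` (both hypothetical stand-ins for the learned, in-context components) and rescales each prediction toward the equilibrium at the origin until the sufficient-decrease condition V(x⁺) ≤ (1 − α)V(x) holds.

```python
import numpy as np

# Illustrative sketch only. V, f, and the scaling rule are hypothetical
# stand-ins; the paper's learned models and attenuator are not reproduced here.

def V(x):
    """Quadratic Lyapunov candidate V(x) = ||x||^2 (stand-in for a learned V)."""
    return float(np.dot(x, x))

def f(x):
    """Toy one-step dynamics (stand-in for the learned in-context model)."""
    A = np.array([[1.05, 0.1],
                  [0.0,  0.9]])  # mildly expanding linear map
    return A @ x

def attenuated_step(x, alpha=0.1):
    """Shrink the prediction toward the origin until V(x_next) <= (1-alpha)V(x)."""
    x_next = f(x)
    target = (1.0 - alpha) * V(x)
    if V(x_next) > target and V(x_next) > 0.0:
        # For quadratic V, scaling x_next by s multiplies V(x_next) by s^2,
        # so this choice of s lands exactly on the decrease threshold.
        s = np.sqrt(target / V(x_next))
        x_next = s * x_next
    return x_next

x = np.array([1.0, 1.0])
for _ in range(5):
    v_prev = V(x)
    x = attenuated_step(x)
    assert V(x) <= (1 - 0.1) * v_prev + 1e-12  # decrease condition enforced
```

The key point mirrored here is that the correction is state-dependent: the attenuator only intervenes at states where the raw prediction would violate the decrease condition, leaving the learned model untouched elsewhere.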
Problem

Research questions and friction points this paper is trying to address.

Ensuring stability and adaptability in neural-network-based system identification
Addressing distribution shifts and parametric uncertainty in dynamical systems
Providing a unified framework for stability-aware adaptive dynamics models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly learns a dynamics model and a Lyapunov function via in-context learning
Adapts to new system instances using short trajectory prompts for fast generalization
Enforces stability with a state-dependent attenuator under distribution shifts
Amit Jena
Department of Electrical and Computer Engineering, Texas A&M University, USA
Na Li
Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, USA
Le Xie
Gordon McKay Professor of Electrical Engineering, Harvard University
Power Systems Economics · Data Sciences · Public Policy · Artificial Intelligence