🤖 AI Summary
This work investigates the theoretical performance and practical efficacy of federated learning for system identification in linearly parameterized nonlinear dynamical systems. We address multi-client collaborative modeling of physical systems such as inverted pendulums and quadrotors under non-active exploration, with i.i.d. control inputs and stochastic disturbances. We propose a federated identification framework that leverages real-analytic feature mappings (e.g., polynomial or trigonometric bases) and guarantees convergence without active exploration. Our key contributions are a theoretical characterization of how nonlinear features $φ$ enhance persistent excitation, together with a proof that increasing the number of clients accelerates global convergence, so that collaboration yields a genuine statistical benefit rather than only added communication and computation overhead. Extensive experiments demonstrate stable, scalable performance gains across varying noise levels and heterogeneous data distributions.
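As a concrete illustration of the model class, here is a minimal sketch of a linearly parameterized nonlinear system $x_{t+1} = Θ^* φ(x_t, u_t) + w_t$ with a trigonometric/polynomial feature map. The pendulum-style features, dimensions, and noise model are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def phi(x, u):
    """Illustrative real-analytic feature map for a pendulum-like system:
    state x = (angle, angular velocity), scalar input u.
    Mixes linear, trigonometric, and polynomial features (assumed basis)."""
    angle, omega = x
    return np.array([angle, omega, u, np.sin(angle), np.cos(angle), omega**2])

def step(Theta_star, x, u, noise_std=0.01, rng=None):
    """One step of the linearly parameterized nonlinear dynamics
    x_{t+1} = Theta* @ phi(x_t, u_t) + w_t with Gaussian process noise w_t."""
    rng = rng or np.random.default_rng()
    w = noise_std * rng.standard_normal(Theta_star.shape[0])
    return Theta_star @ phi(x, u) + w
```

Here $Θ^*$ is the unknown parameter matrix (state dimension by feature dimension) that the clients collaboratively identify.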
📝 Abstract
We consider federated learning of linearly parameterized nonlinear systems. We establish theoretical guarantees on the effectiveness of federated nonlinear system identification compared to centralized approaches, demonstrating that the convergence rate improves as the number of clients increases. Although the convergence rates in the linear and nonlinear cases differ only by a constant, this constant depends on the feature map $φ$, which can be carefully chosen in the nonlinear setting to increase excitation and improve performance. We experimentally validate our theory in physical settings where client devices are driven by i.i.d. control inputs and by control policies with i.i.d. random perturbations, ensuring non-active exploration. Experiments use trajectories from nonlinear dynamical systems characterized by real-analytic feature functions, including polynomial and trigonometric components, representative of physical systems such as pendulum and quadrotor dynamics. We analyze the convergence behavior of the proposed method under varying noise levels and data distributions. Results show that federated learning consistently improves each individual client's convergence as the number of participating clients increases.
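To make the collaborative estimation concrete, the following is a minimal sketch of one federated round under the setting above: each client fits a regularized least-squares estimate of $Θ$ on its own trajectory, and the server averages the local estimates. The function names, single-round averaging, and ridge regularization are illustrative assumptions, not the paper's exact algorithm; `phi` is a user-supplied feature map such as the one sketched earlier.

```python
import numpy as np

def local_least_squares(traj, phi, lam=1e-6):
    """Per-client regularized least-squares estimate of Theta from one
    trajectory, given as a list of (x_t, u_t, x_{t+1}) transitions."""
    Phi = np.stack([phi(x, u) for x, u, _ in traj])   # (T, d_phi) feature matrix
    Y = np.stack([x_next for _, _, x_next in traj])   # (T, d_x) next-state targets
    # Normal equations for min_Theta ||Phi Theta^T - Y||^2 + lam ||Theta||^2
    G = Phi.T @ Phi + lam * np.eye(Phi.shape[1])      # regularized Gram matrix
    return np.linalg.solve(G, Phi.T @ Y).T            # (d_x, d_phi)

def federated_estimate(client_trajs, phi):
    """Server-side averaging of the clients' local estimates."""
    estimates = [local_least_squares(traj, phi) for traj in client_trajs]
    return sum(estimates) / len(estimates)
```

In this simplified picture, the excitation delivered by $φ$ shows up in the conditioning of the Gram matrix $Φ^\top Φ$, and the benefit of federation shows up as the averaged estimate improving as more client trajectories participate, consistent with the convergence behavior described above.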