🤖 AI Summary
This work addresses the heavy reliance on human expertise in algorithm discovery for scientific computing. We propose an LLM-guided Cartesian Genetic Programming (CGP) co-evolutionary framework for data-driven automatic discovery of Kalman filters. Methodologically, the LLM injects semantic priors and structural constraints, while CGP evolves interpretable computational graphs of filters under data-driven feedback; the two components jointly optimize both algorithmic structure and parameters. Our contributions are threefold: (1) the first deep integration of LLMs and CGP for automated reconstruction of classical estimation algorithms; (2) convergence to near-optimal solutions when the Kalman optimality assumptions hold, and, when they are violated, evolution of novel, high-performing, and structurally transparent filter variants; and (3) generation of algorithms that are simultaneously interpretable, robust, and amenable to scientific validation, establishing a new paradigm for "algorithm discovery" in scientific computing.
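For context, the algorithm the framework aims to rediscover is the classical linear Kalman filter. A minimal predict/update sketch (the matrix names below follow the standard textbook formulation, not any encoding specific to this paper) is:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter.
    x: state estimate, P: state covariance, z: measurement,
    F: transition model, H: observation model, Q/R: process/measurement noise."""
    # Predict step: propagate state and covariance through the dynamics
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update step: correct the prediction with the measurement
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

When the optimality assumptions (linear dynamics, Gaussian noise, known F, H, Q, R) hold, these recursions are the minimum-variance estimator, which is why convergence of the evolved filters toward this structure is a meaningful benchmark.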
📝 Abstract
Algorithmic discovery has traditionally relied on human ingenuity and extensive experimentation. Here we investigate whether a prominent scientific computing algorithm, the Kalman filter, can be discovered through an automated, data-driven, evolutionary process that relies on Cartesian Genetic Programming (CGP) and Large Language Models (LLMs). We evaluate the contributions of both components (CGP and LLM) to discovering the Kalman filter under varying conditions. Our results demonstrate that CGP with LLM-assisted evolution converges to near-optimal solutions when the Kalman optimality assumptions hold. When these assumptions are violated, our framework evolves interpretable alternatives that outperform the Kalman filter. These results demonstrate that combining evolutionary algorithms and generative models for interpretable, data-driven synthesis of simple computational modules is a potent approach to algorithmic discovery in scientific computing.
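To make the CGP side concrete: a Cartesian genetic program encodes a feed-forward graph of primitive operations, and evolution mutates which primitives and which earlier values each node consumes. The sketch below is an illustrative decoder, not the paper's actual genome format or function set (both `PRIMITIVES` and the genome layout are assumptions):

```python
import operator

# Hypothetical primitive set; a real run would include filter-relevant ops.
PRIMITIVES = [operator.add, operator.sub, operator.mul,
              lambda a, b: a / b if b != 0 else 1.0]  # protected division

def evaluate_cgp(genome, inputs, output_gene):
    """Decode a CGP genome as a feed-forward graph.
    Each node is (func_idx, in1, in2), where in1/in2 index
    either the program inputs or earlier nodes."""
    values = list(inputs)
    for func_idx, a, b in genome:
        values.append(PRIMITIVES[func_idx](values[a], values[b]))
    return values[output_gene]

# Example genome computing (x + y) * x for inputs [x, y]
genome = [(0, 0, 1),   # node 2: x + y
          (2, 2, 0)]   # node 3: (x + y) * x
```

Because the genome is just a list of integer triples, it is cheap to mutate, and the decoded graph stays human-readable, which is what makes the evolved filter variants amenable to scientific validation.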