🤖 AI Summary
This work addresses the problem of efficiently verifying whether Kolmogorov–Arnold Networks (KANs) satisfy specified output properties for given input sets. To this end, the authors propose a verification framework based on piecewise-affine (PWA) abstractions: the nonlinear activation functions in KANs are approximated by PWA functions, reducing the verification problem to a mixed-integer linear program (MILP). A key novelty is the combination of dynamic programming with knapsack optimization to automatically determine the number of segments for each activation unit, minimizing abstraction complexity while guaranteeing a global error bound. Experiments on multiple KAN benchmarks show that the method yields tighter, decidable property verification at a manageable preprocessing cost.
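As a rough illustration of the abstraction step described above, the sketch below replaces a univariate activation by a piecewise-affine interpolant and measures the empirical gap between the two. Everything here is an assumption for illustration: the uniform knot placement, the SiLU-style activation, and the grid-based error estimate are stand-ins, not the paper's optimized segmentation or certified error bound.

```python
import numpy as np

def pwa_abstraction(f, lo, hi, n_pieces, n_samples=2001):
    """Replace a univariate activation f on [lo, hi] by a continuous
    piecewise-affine interpolant with n_pieces uniform segments, and
    estimate the maximum abstraction error on a dense grid.
    Illustrative sketch only: the paper optimizes the segmentation
    rather than using uniform knots."""
    knots = np.linspace(lo, hi, n_pieces + 1)
    xs = np.linspace(lo, hi, n_samples)
    approx = np.interp(xs, knots, f(knots))   # evaluate the PWA surrogate
    err = float(np.max(np.abs(f(xs) - approx)))  # empirical error estimate
    return knots, err

# Example: abstracting a SiLU-like activation on [-3, 3].
# More pieces give a tighter abstraction, at the cost of more
# binary variables in the downstream MILP encoding.
silu = lambda x: x / (1.0 + np.exp(-x))
_, err_4 = pwa_abstraction(silu, -3.0, 3.0, 4)
_, err_16 = pwa_abstraction(silu, -3.0, 3.0, 16)
```

This makes the trade-off in the summary concrete: shrinking the error requires more segments, and each segment adds a binary variable to the MILP.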
📝 Abstract
We present a novel approach for verifying properties of Kolmogorov–Arnold Networks (KANs), a class of neural networks characterized by nonlinear, univariate activation functions typically implemented as piecewise polynomial splines or Gaussian processes. Our method creates mathematical "abstractions" by replacing each KAN unit with a piecewise affine (PWA) function, providing both local and global error estimates between the original network and its approximation. These abstractions enable property verification by encoding the problem as a Mixed Integer Linear Program (MILP), determining whether outputs satisfy specified properties when inputs belong to a given set. A critical challenge lies in balancing the number of pieces in the PWA approximation: too many pieces add binary variables that make verification computationally intractable, while too few pieces create excessive error margins that yield uninformative bounds. Our key contribution is a systematic framework that exploits KAN structure to find optimal abstractions. By combining dynamic programming at the unit level with a knapsack optimization across the network, we minimize the total number of pieces while guaranteeing specified error bounds. This approach determines the optimal approximation strategy for each unit while maintaining overall accuracy requirements. Empirical evaluation across multiple KAN benchmarks demonstrates that the upfront analysis costs of our method are justified by superior verification results.
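The piece-budgeting idea in the abstract, choosing per-unit piece counts that minimize the network-wide total subject to a global error bound, can be sketched as a knapsack-style dynamic program over a discretized error budget. The function name, the error tables, and the ceiling-based discretization are all hypothetical illustrations; the authors' exact formulation may differ.

```python
import math

def allocate_pieces(unit_errors, eps, grid=100):
    """Given per-unit tables unit_errors[i][k] = abstraction error of
    unit i when using k+1 pieces (assumed non-increasing in k), choose
    a piece count per unit minimizing the total number of pieces
    subject to the summed error staying within eps.  Solved as a
    knapsack-style DP over a discretized error budget.  Hypothetical
    sketch of the scheme the abstract describes."""
    INF = math.inf
    # dp[b] = (min total pieces with remaining budget b, per-unit piece counts)
    dp = [(INF, [])] * (grid + 1)
    dp[grid] = (0, [])
    for errs in unit_errors:
        new = [(INF, [])] * (grid + 1)
        for b, (pieces, choice) in enumerate(dp):
            if pieces == INF:
                continue
            for k, e in enumerate(errs):
                # Conservative rounding: discretized costs summing to
                # at most `grid` guarantee the true errors sum to <= eps.
                cost = math.ceil(e * grid / eps)
                if b - cost >= 0 and pieces + k + 1 < new[b - cost][0]:
                    new[b - cost] = (pieces + k + 1, choice + [k + 1])
        dp = new
    # Assumes at least one feasible allocation exists.
    return min((p, c) for p, c in dp if p != INF)

# Toy example: two units, three candidate piece counts each.
unit_errors = [[0.5, 0.2, 0.05], [0.4, 0.1, 0.01]]
total, pieces = allocate_pieces(unit_errors, eps=0.3)
```

The discretization makes the budget check conservative, so any allocation the DP returns still satisfies the true error bound; a finer `grid` trades preprocessing time for allocations closer to the continuous optimum.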