Non-monotonic causal discovery with Kolmogorov-Arnold Fuzzy Cognitive Maps

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional fuzzy cognitive maps (FCMs) struggle to model non-monotonic causal relationships due to their reliance on scalar edge weights and monotonic activation functions, rendering them incapable of capturing saturation effects or periodic dynamics. To address this limitation, this work proposes the Kolmogorov–Arnold Fuzzy Cognitive Map (KA-FCM), which for the first time integrates the Kolmogorov–Arnold representation theorem into the FCM framework. By replacing static scalar weights with learnable univariate B-spline functions, KA-FCM shifts nonlinearity into the causal influence stage, enabling direct modeling of arbitrary non-monotonic dependencies without increasing graph density or introducing hidden layers. The approach achieves high accuracy while preserving interpretability. Empirical results on Yerkes–Dodson law reasoning, symbolic regression, and chaotic time series prediction demonstrate that KA-FCM significantly outperforms conventional FCMs, matching the performance of multilayer perceptrons and allowing explicit extraction of underlying mathematical laws.
📝 Abstract
Fuzzy Cognitive Maps constitute a neuro-symbolic paradigm for modeling complex dynamic systems, widely adopted for their inherent interpretability and recurrent inference capabilities. However, the standard FCM formulation, characterized by scalar synaptic weights and monotonic activation functions, is fundamentally constrained in modeling non-monotonic causal dependencies, thereby limiting its efficacy in systems governed by saturation effects or periodic dynamics. To overcome this topological restriction, this research proposes the Kolmogorov-Arnold Fuzzy Cognitive Map (KA-FCM), a novel architecture that redefines the causal transmission mechanism. Drawing upon the Kolmogorov-Arnold representation theorem, static scalar weights are replaced with learnable, univariate B-spline functions located on the model edges. This modification shifts the non-linearity from the nodes' aggregation phase directly to the causal influence phase, allowing arbitrary, non-monotonic causal relationships to be modeled without increasing graph density or introducing hidden layers. The proposed architecture is validated against both baselines (standard FCM trained with Particle Swarm Optimization) and universal black-box approximators (Multi-Layer Perceptron) across three distinct domains: non-monotonic inference (Yerkes-Dodson law), symbolic regression, and chaotic time-series forecasting. Experimental results demonstrate that KA-FCMs significantly outperform conventional architectures and achieve competitive accuracy relative to MLPs, while preserving graph-based interpretability and enabling the explicit extraction of mathematical laws from the learned edges.
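To make the core idea concrete, here is a minimal sketch of the edge-function mechanism the abstract describes. This is not the paper's implementation: the class names (`EdgeSpline`, `KAFCM`), the knot count, and the use of piecewise-linear interpolation in place of true B-splines are all simplifying assumptions made for illustration. What it shows is the structural change: each edge carries a learnable univariate function of the source concept's activation, so a single edge can encode a non-monotonic (e.g. inverted-U) influence that a scalar weight cannot.

```python
import numpy as np

class EdgeSpline:
    """A learnable univariate function attached to one FCM edge.

    The paper uses B-splines; for brevity this sketch uses piecewise-linear
    interpolation over a fixed knot grid, which preserves the key property:
    the edge's influence can be an arbitrary (non-monotonic) function of the
    source activation, with the knot values acting as trainable parameters.
    """
    def __init__(self, n_knots=8, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.grid = np.linspace(0.0, 1.0, n_knots)          # activations assumed in [0, 1]
        self.values = rng.normal(scale=0.1, size=n_knots)   # learnable coefficients

    def __call__(self, x):
        return np.interp(x, self.grid, self.values)

class KAFCM:
    """Kolmogorov-Arnold FCM: edges carry splines instead of scalar weights."""
    def __init__(self, n_concepts, rng=None):
        self.n = n_concepts
        self.edges = [[EdgeSpline(rng=rng) for _ in range(n_concepts)]
                      for _ in range(n_concepts)]

    def step(self, state):
        # Each concept aggregates the (possibly non-monotonic) influences
        # of all other concepts: A_i(t+1) = f( sum_j phi_ij(A_j(t)) ).
        nxt = np.empty(self.n)
        for i in range(self.n):
            nxt[i] = sum(self.edges[i][j](state[j])
                         for j in range(self.n) if j != i)
        # Squash back into [0, 1]; a sigmoid is the conventional FCM choice.
        return 1.0 / (1.0 + np.exp(-nxt))
```

Contrast with a standard FCM, where `self.edges[i][j]` would be a single float and the only nonlinearity is the monotonic squashing function at the node, which is exactly the restriction the paper targets.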
Problem

Research questions and friction points this paper is trying to address.

non-monotonic causal discovery
Fuzzy Cognitive Maps
causal dependencies
Kolmogorov-Arnold representation
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold representation
Fuzzy Cognitive Maps
non-monotonic causal discovery
B-spline functions
interpretable AI