ECSEL: Explainable Classification via Signomial Equation Learning

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the longstanding challenge in interpretable classification of balancing predictive performance with structural transparency. To this end, the authors propose ECSEL, an approach that learns signomial equations via symbolic regression, yielding closed-form expressions that achieve high classification accuracy together with global interpretability. The resulting models support analytical inspection of decision boundaries, local feature attribution, and counterfactual reasoning. Empirical evaluations show that the method outperforms existing symbolic regression techniques on standard benchmarks while offering superior computational efficiency. In real-world applications such as e-commerce and fraud detection, it maintains competitive accuracy while uncovering data biases and delivering actionable insights.

📝 Abstract
We introduce ECSEL, an explainable classification method that learns formal expressions in the form of signomial equations, motivated by the observation that many symbolic regression benchmarks admit compact signomial structure. ECSEL directly constructs a structural, closed-form expression that serves as both a classifier and an explanation. On standard symbolic regression benchmarks, our method recovers a larger fraction of target equations than competing state-of-the-art approaches while requiring substantially less computation. Leveraging this efficiency, ECSEL achieves classification accuracy competitive with established machine learning models without sacrificing interpretability. Further, we show that ECSEL satisfies several desirable properties regarding global feature behavior, decision-boundary analysis, and local feature attributions. Experiments on benchmark datasets and two real-world case studies, in e-commerce and fraud detection, demonstrate that the learned equations expose dataset biases, support counterfactual reasoning, and yield actionable insights.
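To make the idea of a signomial decision function concrete, here is a minimal sketch (not the authors' implementation; the coefficients and exponents below are hypothetical). A signomial is a sum of terms $c_k \prod_j x_j^{a_{kj}}$ with real coefficients $c_k$ and real exponents $a_{kj}$ over positive inputs; a closed-form classifier of the kind the abstract describes can predict by the sign of such an expression:

```python
import numpy as np

def signomial(x, coeffs, exponents):
    """Evaluate f(x) = sum_k coeffs[k] * prod_j x[j] ** exponents[k, j].

    x: positive feature vector of length d
    coeffs: real coefficients, shape (K,)
    exponents: real exponents, shape (K, d)
    """
    x = np.asarray(x, dtype=float)
    # Each row of `exponents` defines one monomial term; take the product
    # over features, then the coefficient-weighted sum over terms.
    terms = coeffs * np.prod(x ** exponents, axis=1)
    return terms.sum()

def classify(x, coeffs, exponents):
    """Predict class 1 if the signomial is positive, else class 0."""
    return int(signomial(x, coeffs, exponents) > 0)

# Illustrative (made-up) model: f(x1, x2) = 2*x1^0.5*x2 - 3*x1*x2^-1.
coeffs = np.array([2.0, -3.0])
exponents = np.array([[0.5, 1.0],
                      [1.0, -1.0]])

print(classify([4.0, 2.0], coeffs, exponents))  # f = 8 - 6 = 2  -> 1
print(classify([4.0, 0.5], coeffs, exponents))  # f = 2 - 24 = -22 -> 0
```

Because the decision boundary is the explicit equation $f(x) = 0$, it can be inspected analytically, which is what enables the global interpretability, feature attribution, and counterfactual reasoning the abstract claims.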
Problem

Research questions and friction points this paper is trying to address.

explainable classification
signomial equations
interpretability
symbolic regression
machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Signomial Equation Learning
Symbolic Regression
Interpretable Classification
Closed-form Expression
Adia C. Lumadjeng
Informatics Institute, University of Amsterdam; Department of Business Analytics, Amsterdam Business School, University of Amsterdam
Ilker Birbil
University of Amsterdam
Data Science and Optimization
Erman Acar
Institute for Logic, Language and Computation & Informatics Institute, University of Amsterdam
Neurosymbolic AI, Logic, Causality, Multiagent Systems