🤖 AI Summary
This study addresses the longstanding challenge in interpretable classification of balancing predictive performance with structural transparency. To this end, the authors propose ECSEL, a novel approach that learns signomial equations via symbolic regression, yielding closed-form expressions that simultaneously achieve high classification accuracy and global interpretability. The resulting models support analytical inspection of decision boundaries, local feature attribution, and counterfactual reasoning. Empirical evaluations demonstrate that the method recovers a larger fraction of target equations than existing symbolic regression techniques on standard benchmarks while offering superior computational efficiency. Moreover, in real-world applications such as e-commerce and fraud detection, it maintains competitive accuracy while uncovering dataset biases and delivering actionable insights.
📝 Abstract
We introduce ECSEL, an explainable classification method that learns formal expressions in the form of signomial equations, motivated by the observation that many symbolic regression benchmarks admit compact signomial structure. ECSEL directly constructs a structural, closed-form expression that serves as both a classifier and an explanation. On standard symbolic regression benchmarks, our method recovers a larger fraction of target equations than competing state-of-the-art approaches while requiring substantially less computation. Leveraging this efficiency, ECSEL achieves classification accuracy competitive with established machine learning models without sacrificing interpretability. Further, we show that ECSEL satisfies desirable properties regarding global feature behavior, decision-boundary analysis, and local feature attribution. Experiments on benchmark datasets and two real-world case studies, e-commerce and fraud detection, demonstrate that the learned equations expose dataset biases, support counterfactual reasoning, and yield actionable insights.
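To make the model class concrete: a signomial is a sum of terms of the form c · ∏ⱼ xⱼ^aⱼ, with real coefficients and real (possibly negative or fractional) exponents, and such an expression can act as a classifier by thresholding its value. The sketch below illustrates this form only; the coefficients and exponents are hypothetical placeholders, not an actual model learned by ECSEL.

```python
def signomial(x, terms):
    """Evaluate f(x) = sum of c * prod_j x_j ** a_j over (c, exponents) terms."""
    total = 0.0
    for c, exponents in terms:
        prod = 1.0
        for xj, aj in zip(x, exponents):
            prod *= xj ** aj  # real exponents allowed, including negatives
        total += c * prod
    return total

def classify(x, terms, threshold=0.0):
    """Predict class 1 if the signomial exceeds the threshold, else class 0."""
    return 1 if signomial(x, terms) > threshold else 0

# Hypothetical expression with two features:
#   f(x1, x2) = 2.0 * x1^1.5 * x2^-1  -  0.5 * x2^2
terms = [(2.0, (1.5, -1.0)), (-0.5, (0.0, 2.0))]

# At (x1, x2) = (4.0, 2.0): f = 2*8*0.5 - 0.5*4 = 6.0 > 0, so class 1.
print(classify((4.0, 2.0), terms))  # → 1
```

Because the classifier is a single closed-form expression, its decision boundary is the level set f(x) = threshold, which is what makes analytical boundary inspection and counterfactual reasoning possible in principle.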