AI Summary
Targeting high-assurance decision-making scenarios, this work proposes the first end-to-end differentiable compact rule-based classifier, breaking away from traditional discrete rule search paradigms. Methodologically, it pioneers the integration of continuous differentiable parameterization and gradient-based optimization into rule learning, jointly optimizing rule structure (antecedents and consequents) and fuzzy partition parameters. Soft-constraint regularization explicitly controls rule count, rule length, and fuzzy granularity, thereby balancing interpretability, controllability, and scalability. Evaluated on 40 benchmark datasets, the model achieves accuracy competitive with black-box models while yielding significantly more compact rule sets, reducing the average number of rule patterns by over 50% compared to state-of-the-art interpretable classifiers.
Abstract
Rule-based models play a crucial role in scenarios that require transparency and accountable decision-making. However, they primarily consist of discrete parameters and structures, which presents challenges for scalability and optimization. In this work, we introduce a new rule-based classifier trained using gradient descent, in which the user can control the maximum number and length of the rules. For numerical features, the user can also control the fuzzy partitions used, which helps keep the number of partitions small. We perform a series of exhaustive experiments on $40$ datasets to show how this classifier performs in terms of accuracy and rule base size. We then compare our results with a genetic search that fits an equivalent classifier, and with other explainable and non-explainable state-of-the-art classifiers. Our results show that our method obtains compact rule bases that use significantly fewer patterns than other rule-based methods, while performing better than other explainable classifiers.
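To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of how a fuzzy rule base can be made end-to-end differentiable: Gaussian membership functions give soft fuzzy partitions, a softmax over per-feature set choices relaxes the discrete antecedent selection, and rule firing strengths weight learnable class consequents. All names (`centers`, `widths`, `rule_logits`, `consequents`) and the specific relaxations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_sets, n_rules, n_classes = 4, 3, 5, 2

# Fuzzy partition parameters per feature: Gaussian centers and widths
# (continuous, so they can be tuned by gradient descent).
centers = rng.normal(size=(n_features, n_sets))
widths = np.full((n_features, n_sets), 0.5)

# Soft rule antecedents: for each (rule, feature), logits over fuzzy sets.
# A softmax over these relaxes the discrete "which set does this rule use"
# choice into a differentiable one.
rule_logits = rng.normal(size=(n_rules, n_features, n_sets))

# Rule consequents: a learnable class-score vector per rule.
consequents = rng.normal(size=(n_rules, n_classes))


def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def forward(x):
    """Differentiable forward pass for one sample x of shape (n_features,)."""
    # Membership of each feature value in each fuzzy set.      (F, S)
    mu = np.exp(-0.5 * ((x[:, None] - centers) / widths) ** 2)
    # Soft selection of one fuzzy set per (rule, feature).     (R, F, S)
    sel = softmax(rule_logits)
    # Expected membership under the soft selection.            (R, F)
    per_feature = (sel * mu[None]).sum(-1)
    # Product t-norm: soft conjunction over features.          (R,)
    firing = per_feature.prod(-1)
    # Firing-strength-weighted vote over rule consequents.     (C,)
    scores = firing @ consequents
    return softmax(scores)


probs = forward(rng.normal(size=n_features))
```

Because every step is smooth, the loss gradient flows back into the partition parameters, the antecedent logits, and the consequents alike; sparsity-inducing regularizers on `rule_logits` and `consequents` would then play the role of the soft constraints on rule count and length described above.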