CoFrNets: Interpretable Neural Architecture Inspired by Continued Fractions

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural architectures lack intrinsic interpretability because their designs offer no structural transparency. Method: this paper proposes CoFrNets, a novel interpretable neural architecture inspired by the mathematical structure of continued fractions. The architecture explicitly embeds continued-fraction representations, making interpretability intrinsic to the model's structure. Training uses a rational-function parameterization with gradient-based optimization, and feature attributions are derived in closed form. Contribution/Results: CoFrNets come with a universal approximation proof whose strategy differs from the usual infinite-width (or infinite-depth) arguments; they support analytical feature attribution and quantification of higher-order feature interactions; they accurately model higher-order nonlinearities on synthetic functions; and they are comparable to or significantly better than other interpretable models on seven real-world datasets spanning tabular, text, and image modalities, in several cases approaching the accuracy of state-of-the-art black-box models.
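To make the continued-fraction idea concrete, here is a minimal sketch of one CoFrNet "ladder": a continued fraction whose terms are linear functions of the input, evaluated from the deepest rung upward. The function name, weight layout, and the epsilon clipping of small denominators are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def cofrnet_ladder(x, W, eps=1e-3):
    """Evaluate one continued-fraction 'ladder' on input x:

        f(x) = w_0.x + 1 / (w_1.x + 1 / (w_2.x + ...))

    W is a (depth, d) array of per-rung weights. Clipping the
    denominator away from zero via eps is an assumption made here
    for numerical stability, not the paper's exact mechanism.
    """
    h = W[-1] @ x                          # deepest rung is purely linear
    for w in W[-2::-1]:                    # walk back up the ladder
        s = 1.0 if h >= 0 else -1.0
        denom = s * max(abs(h), eps)       # keep 1/h finite
        h = w @ x + 1.0 / denom
    return h

# Toy usage: a 3-rung ladder on a 2-feature input
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
x = np.array([0.5, -1.0])
print(cofrnet_ladder(x, W))
```

Because each rung is linear in x, the whole ladder is a rational function of the input, which is what makes closed-form interpretation tractable.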

📝 Abstract
In recent years there has been a considerable amount of research on local post hoc explanations for neural networks. However, work on building interpretable neural architectures has been relatively sparse. In this paper, we present a novel neural architecture, CoFrNet, inspired by the form of continued fractions which are known to have many attractive properties in number theory, such as fast convergence of approximations to real numbers. We show that CoFrNets can be efficiently trained as well as interpreted leveraging their particular functional form. Moreover, we prove that such architectures are universal approximators based on a proof strategy that is different than the typical strategy used to prove universal approximation results for neural networks based on infinite width (or depth), which is likely to be of independent interest. We experiment on nonlinear synthetic functions and are able to accurately model as well as estimate feature attributions and even higher order terms in some cases, which is a testament to the representational power as well as interpretability of such architectures. To further showcase the power of CoFrNets, we experiment on seven real datasets spanning tabular, text and image modalities, and show that they are either comparable or significantly better than other interpretable models and multilayer perceptrons, sometimes approaching the accuracies of state-of-the-art models.
Problem

Research questions and friction points this paper is trying to address.

How to design an intrinsically interpretable neural architecture inspired by continued fractions
How to prove universal approximation without the standard infinite-width (or infinite-depth) argument
How to validate both accuracy and interpretability on synthetic and real-world datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural architecture whose structure mirrors continued fractions
Efficient training, with interpretation derived directly from the functional form
Universal approximation proved via a novel strategy of independent interest
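The last point above rests on the model being a rational function of its input, which also enables attribution. The paper derives attributions analytically from the continued-fraction form; the sketch below only illustrates the idea with a hypothetical gradient-times-input attribution computed by central differences on a tiny continued fraction.

```python
import numpy as np

def attribute(f, x, h=1e-5):
    """Hypothetical first-order attribution for a scalar model f:
    central-difference gradient times input. This is an illustrative
    stand-in for the paper's closed-form attributions."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)  # d f / d x_i
    return g * x                                 # gradient * input

# Tiny two-feature continued fraction
f = lambda x: x[0] + 1.0 / (x[1] + 2.0)
x = np.array([1.0, 1.0])
print(attribute(f, x))
```

For this toy f, the attribution of the first feature is exactly its linear coefficient, while the second feature's attribution reflects the reciprocal term's local slope.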