Compiling to recurrent neurons

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Differentiable programming struggles with discrete control structures—such as conditionals and loops—due to the absence of classical derivatives, often excluding iterative logic from gradient-based learning and severely limiting neural networks’ capacity to model algorithmic tasks. This paper introduces Cajal, a typed, higher-order language grounded in linear logic and recursive types, which for the first time compiles iterative, semantically well-defined programs into *linear recurrent neurons* that are behaviorally equivalent to their source programs. Crucially, this compilation is underpinned by a constructive semantic mapping that guarantees correctness and enables seamless integration of iterative control flow into gradient optimization. Experiments on iterative image transformation tasks demonstrate that models incorporating these neurons achieve faster convergence and markedly improved data efficiency, empirically validating the effectiveness and practicality of co-modeling discrete structures and deep learning.

📝 Abstract
Discrete structures are currently second-class in differentiable programming. Since functions over discrete structures lack overt derivatives, differentiable programs do not differentiate through them and limit where they can be used. For example, when programming a neural network, conditionals and iteration cannot be used everywhere; they can break the derivatives necessary for gradient-based learning to work. This limits the class of differentiable algorithms we can directly express, imposing restraints on how we build neural networks and differentiable programs more generally. However, these restraints are not fundamental. Recent work shows conditionals can be first-class, by compiling them into differentiable form as linear neurons. Similarly, this work shows iteration can be first-class -- by compiling to linear recurrent neurons. We present a minimal typed, higher-order and linear programming language with iteration called $\textsf{Cajal}(\multimap, \mathbb{2}, \mathbb{N})$. We prove its programs compile correctly to recurrent neurons, allowing discrete algorithms to be expressed in a differentiable form compatible with gradient-based learning. With our implementation, we conduct two experiments where we link these recurrent neurons against a neural network solving an iterative image transformation task. This determines part of its function prior to learning. As a result, the network learns faster and with greater data-efficiency relative to a neural network programmed without first-class iteration. A key lesson is that recurrent neurons enable a rich interplay between learning and the discrete structures of ordinary programming.
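The core idea, that a bounded loop is behaviorally equivalent to a linear recurrent neuron, can be sketched outside the paper's formalism. The example below is an illustrative NumPy sketch, not the actual Cajal compiler: it takes the discrete program "apply `W` to `x`, `n` times" and its "compiled" form, a linear recurrence whose closed form is `W**n @ x`, and checks that the two agree. All names (`loop_program`, `linear_recurrent_neuron`) are hypothetical.

```python
import numpy as np

def loop_program(W, x, n):
    """Discrete source program: ordinary iteration."""
    for _ in range(n):
        x = W @ x
    return x

def linear_recurrent_neuron(W, x, n):
    """'Compiled' form: a linear recurrent neuron h_{t+1} = W h_t
    with no nonlinearity; its n-step unrolling is just W^n x.
    Because every step is linear, the map is differentiable in W."""
    return np.linalg.matrix_power(W, n) @ x

W = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x = np.array([1.0, 2.0])

# Behavioral equivalence of the loop and its recurrent-neuron form.
assert np.allclose(loop_program(W, x, 3), linear_recurrent_neuron(W, x, 3))
```

Because no step function or branch appears anywhere, the recurrent form exposes exact derivatives with respect to `W`, which is what lets it sit inside a gradient-trained network.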
Problem

Research questions and friction points this paper is trying to address.

Compiling discrete algorithms into differentiable recurrent neurons
Enabling gradient-based learning through iteration and conditionals
Improving neural network training efficiency with discrete structures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compiling discrete structures into recurrent neurons
Creating differentiable form for discrete algorithms
Enabling gradient-based learning with iteration
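To make the third point concrete, here is a hand-rolled sketch of backpropagating a loss *through* an unrolled bounded loop, again as an assumed illustration rather than the paper's implementation. For $L = \lVert h_n - y\rVert^2$ with $h_{t+1} = W h_t$, the gradient is $\sum_t \delta_t h_{t-1}^\top$ with $\delta_{t-1} = W^\top \delta_t$; a finite-difference check confirms the derivation.

```python
import numpy as np

def forward(W, x, n):
    """Run the n-step linear recurrence, keeping intermediate states."""
    hs = [x]
    for _ in range(n):
        hs.append(W @ hs[-1])
    return hs

def grad_W(W, x, y, n):
    """dL/dW for L = ||h_n - y||^2, by hand-written backprop
    through the unrolled linear recurrence."""
    hs = forward(W, x, n)
    delta = 2.0 * (hs[-1] - y)             # dL/dh_n
    g = np.zeros_like(W)
    for t in range(n, 0, -1):
        g += np.outer(delta, hs[t - 1])    # step-t contribution to dL/dW
        delta = W.T @ delta                # push gradient one step back
    return g

W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
x = np.array([1.0, 2.0])
y = np.array([0.5, -0.3])

analytic = grad_W(W, x, y, n=3)

# Central finite differences to verify the analytic gradient.
numeric = np.zeros_like(W)
eps = 1e-6
for i in range(2):
    for j in range(2):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        lp = np.sum((forward(Wp, x, 3)[-1] - y) ** 2)
        lm = np.sum((forward(Wm, x, 3)[-1] - y) ** 2)
        numeric[i, j] = (lp - lm) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
```

The point of the sketch: because the loop body is linear, the gradient with respect to the recurrent weight is exact at every unrolled step, so iteration-heavy program fragments can be tuned by the same optimizer as the rest of the network.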