🤖 AI Summary
Conventional neural networks rely on gradient descent for indirect functional learning, making it difficult to embed non-differentiable discrete algorithms; the result is rigid architectures, poor interpretability, and low data efficiency. Method: We propose Cajal, a typed, higher-order, linear functional programming language that formally bridges discrete algorithm design and differentiable computation. Crucially, we provide the first formal proof that discrete algorithms can be compiled into differentiable linear neurons, enabling direct, declarative programming of neural networks while preserving end-to-end gradient-based learning. Contribution/Results: Cajal establishes a novel compilation-theoretic framework unifying algorithmic specification and differentiable programming. Empirical evaluation demonstrates that neural modules generated from Cajal achieve significantly faster training convergence, improved data efficiency, and enhanced model interpretability and debuggability compared to standard baselines.
📝 Abstract
We don't program neural networks directly. Instead, we rely on an indirect style where learning algorithms, like gradient descent, determine a neural network's function by learning from data. This indirect style is often a virtue; it empowers us to solve problems that were previously impossible. But it lacks discrete structure. We can't compile most algorithms into a neural network -- even if these algorithms could help the network learn. This limitation occurs because discrete algorithms are not obviously differentiable, making them incompatible with the gradient-based learning algorithms that determine a neural network's function. To address this, we introduce $\textsf{Cajal}$: a typed, higher-order and linear programming language intended to be a minimal vehicle for exploring a direct style of programming neural networks. We prove $\textsf{Cajal}$ programs compile to linear neurons, allowing discrete algorithms to be expressed in a differentiable form compatible with gradient-based learning. With our implementation of $\textsf{Cajal}$, we conduct several experiments where we link these linear neurons against other neural networks to determine part of their function prior to learning. Linking with these neurons allows networks to learn faster, with greater data-efficiency, and in a way that's easier to debug. A key lesson is that linear programming languages provide a path towards directly programming neural networks, enabling a rich interplay between learning and the discrete structures of ordinary programming.