Learners' languages

📅 2021-03-01
🏛️ ACT
📈 Citations: 15
Influential: 3
🤖 AI Summary
This paper establishes a rigorous mathematical connection between the foundational mechanisms of deep learning (gradient descent and backpropagation) and categorical structures. Method: modeling the learning process as a strong monoidal functor; establishing the hierarchical embedding Learn ≅ Para(Slens) ⊂ Para(Poly); interpreting learner interfaces as internal-hom types in the category Poly; and formalizing the logical semantics of gradient descent in the topos of p-coalgebras. Contribution/Results: the paper unifies the algebraic structure of backpropagation with the dynamical-systems semantics of generalized Moore machines, revealing intrinsic links to polynomial functors and simple lenses; it provides a computable logical model and a foundation for formal verification of learning processes, advancing the categorical foundations of deep learning. This work bridges abstract category theory and concrete machine learning practice, enabling principled analysis and verification of learning algorithms through compositional, type-theoretic, and coalgebraic methods.
📝 Abstract
In "Backprop as functor", the authors show that the fundamental elements of deep learning -- gradient descent and backpropagation -- can be conceptualized as a strong monoidal functor Para(Euc) $\to$ Learn from the category of parameterized Euclidean spaces to that of learners, a category developed explicitly to capture parameter update and backpropagation. It was soon realized that there is an isomorphism Learn $\cong$ Para(Slens), where Slens is the symmetric monoidal category of simple lenses as used in functional programming. In this note, we observe that Slens is a full subcategory of Poly, the category of polynomial functors in one variable, via the functor $A \mapsto Ay^A$. Using the fact that (Poly, $\otimes$) is monoidal closed, we show that a map $A \to B$ in Para(Slens) has a natural interpretation in terms of dynamical systems (more precisely, generalized Moore machines) whose interface is the internal-hom type $[Ay^A, By^B]$. Finally, we review the fact that the category p-Coalg of dynamical systems on any $p \in$ Poly forms a topos, and consider the logical propositions that can be stated in its internal language. We give gradient descent as an example, and we conclude by discussing some directions for future work.
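The correspondence Learn ≅ Para(Slens) can be made concrete: a learner is a lens whose forward map runs the model and whose backward map performs the parameter update and the request passed to the previous layer. Below is a minimal Python sketch for a one-parameter linear model; the names `Lens` and `linear_learner` and the squared-error loss are illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Lens:
    """A simple lens: a forward 'get' map and a backward 'put' map."""
    get: Callable  # domain -> codomain
    put: Callable  # (domain, feedback) -> updated domain

def linear_learner(lr: float) -> Lens:
    """A learner A -> B as a parameterized lens: the domain is a
    (parameter, input) pair, and 'put' is one gradient-descent step
    on the squared error (p*a - b)^2 / 2."""
    def get(pa):
        p, a = pa
        return p * a                  # forward pass of the model
    def put(pa, b):
        p, a = pa
        err = p * a - b               # derivative of the loss in p*a
        new_p = p - lr * err * a      # update: descend in parameter
        new_a = a - lr * err * p      # request: value sent backward
        return (new_p, new_a)
    return Lens(get, put)

learner = linear_learner(lr=0.1)
print(learner.get((1.0, 2.0)))        # forward pass
print(learner.put((1.0, 2.0), 3.0))   # one training step toward target 3.0
```

Composing two such lenses end to end (the `put` of the second feeding the `put` of the first) is exactly how backpropagation chains layers, which is what the functoriality in "Backprop as functor" captures.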
Problem

Research questions and friction points this paper is trying to address.

Conceptualize gradient descent and backpropagation as a strong monoidal functor
Interpret Para(Slens) maps as dynamical systems using Poly
Explore logical propositions in p-Coalg topos for gradient descent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conceptualizes deep learning as strong monoidal functor
Links Slens to polynomial functors via Poly
Interprets Para(Slens) maps as dynamical systems
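The dynamical-systems reading can be sketched as follows: gradient descent is a generalized Moore machine whose state is the parameter, whose inputs are training data, and whose update map is one descent step. The Python sketch below assumes a simple quadratic loss; `Moore` and `grad_descent_machine` are hypothetical names, not the paper's constructions.

```python
from dataclasses import dataclass
from typing import Callable, Any

@dataclass
class Moore:
    """A generalized Moore machine: a state together with a readout
    (output) map and an update (dynamics) map."""
    state: Any
    readout: Callable  # state -> output
    update: Callable   # (state, input) -> next state

def grad_descent_machine(grad, lr, theta0):
    """Gradient descent as a dynamical system: the state is the
    parameter theta, each input is a training datum, and the output
    is the current parameter."""
    return Moore(
        state=theta0,
        readout=lambda th: th,
        update=lambda th, datum: th - lr * grad(th, datum),
    )

# The loss (theta - x)^2 / 2 has gradient theta - x in theta.
m = grad_descent_machine(grad=lambda th, x: th - x, lr=0.5, theta0=0.0)
for x in [4.0, 4.0, 4.0]:
    m.state = m.update(m.state, x)
print(m.readout(m.state))  # the state moves toward the datum 4.0
```

In the paper's terms, such a machine is a coalgebra for a polynomial functor p, and properties like "the loss is non-increasing along trajectories" become propositions in the internal language of the topos p-Coalg.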