🤖 AI Summary
This paper addresses composite optimization problems, such as parameter-sharing models in multi-task learning, by proposing a unified framework for generalized gradient descent grounded in category theory. Methodologically, it adopts Cartesian reverse derivative categories (CRDCs) as the semantic foundation, modeling optimization problems as decorated spans in a hypergraph category and dynamical systems as an analogously defined hypergraph category, linked by a hypergraph functor. The key contribution is the identification of generalized gradient descent as a hypergraph functor, i.e., a structure-preserving transformation between these two categories. Consequently, the induced algorithm is composable, admits distributed implementation, and remains semantically consistent, enabling modular modeling and training. As a case study, the paper shows that the framework naturally captures parameter-sharing mechanisms in multi-task learning, raising both the level of abstraction in optimization and its engineering reusability.
📝 Abstract
Cartesian reverse derivative categories (CRDCs) provide an axiomatic generalization of the reverse derivative, which allows generalized analogues of classic optimization algorithms such as gradient descent to be applied to a broad class of problems. In this paper, we show that generalized gradient descent with respect to a given CRDC induces a hypergraph functor from a hypergraph category of optimization problems to a hypergraph category of dynamical systems. The domain of this functor consists of objective functions that are 1) general in the sense that they are defined with respect to an arbitrary CRDC, and 2) open in that they are decorated spans that can be composed with other such objective functions via variable sharing. The codomain is specified analogously as a category of general and open dynamical systems for the underlying CRDC. We describe how the hypergraph functor induces a distributed optimization algorithm for arbitrary composite problems specified in the domain. To illustrate the kinds of problems our framework can model, we show that parameter sharing models in multitask learning, a prevalent machine learning paradigm, yield a composite optimization problem for a given choice of CRDC. We then apply the gradient descent functor to this composite problem and describe the resulting distributed gradient descent algorithm for training parameter sharing models.
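To make the composition-by-variable-sharing idea concrete, here is a minimal numerical sketch (not the paper's categorical construction): two toy task losses share a single parameter `w`, and the composite update pushes each task's reverse-derivative contribution back to the shared parameter and sums them, mirroring how the gradient descent functor acts on a composite span. The losses, learning rate, and the finite-difference stand-in for the reverse derivative are all illustrative assumptions.

```python
def reverse_derivative(f, x, eps=1e-6):
    """Finite-difference stand-in for the reverse derivative R[f] at x
    (scalar case, cotangent 1); the paper works axiomatically in a CRDC."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Two "open" objectives sharing the parameter w (hypothetical toy losses).
task_a = lambda w: (w - 3.0) ** 2
task_b = lambda w: (w + 1.0) ** 2

def composite_step(w, lr=0.1):
    # Variable sharing: each task's gradient flows back to the shared
    # parameter, and the contributions are summed in the composite system.
    grad = reverse_derivative(task_a, w) + reverse_derivative(task_b, w)
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = composite_step(w)
print(round(w, 3))  # converges to the minimizer of the summed losses: 1.0
```

Each task could run its step on a separate worker and exchange only the gradient on the shared variable, which is the sense in which the induced algorithm is distributed.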