Stochastic Neural Network Symmetrisation in Markov Categories

📅 2024-06-17
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Stochastic neural networks lack a principled framework for structured symmetry modelling. Method: We propose the first general equivariant symmetrisation paradigm for stochastic models. Within the framework of Markov categories, we unify deterministic and stochastic mappings; leveraging group representation theory and categorical semantics, we systematically lift $H$-equivariant networks to $G$-equivariant ones along group homomorphisms, enabling composable symmetrisation. We further extend classical averaging to the stochastic setting under minimally restrictive assumptions, ensuring broad applicability. Contribution/Results: We establish the first composable, mathematically rigorous theory for symmetrising stochastic neural networks, overcoming the limitation that existing symmetrisation techniques apply only to deterministic models. The framework recovers known canonicalisation and averaging methods as special cases and provides a foundation for equivariant learning in probabilistic settings.

📝 Abstract
We consider the problem of symmetrising a neural network along a group homomorphism: given a homomorphism $\varphi : H \to G$, we would like a procedure that converts $H$-equivariant neural networks to $G$-equivariant ones. We formulate this in terms of Markov categories, which allows us to consider neural networks whose outputs may be stochastic, but with measure-theoretic details abstracted away. We obtain a flexible and compositional framework for symmetrisation that relies on minimal assumptions about the structure of the group and the underlying neural network architecture. Our approach recovers existing canonicalisation and averaging techniques for symmetrising deterministic models, and extends to provide a novel methodology for symmetrising stochastic models also. Beyond this, our findings also demonstrate the utility of Markov categories for addressing complex problems in machine learning in a conceptually clear yet mathematically precise way.
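To give a concrete sense of the symmetrisation the abstract describes, here is a minimal sketch of classical group averaging plus its stochastic analogue, for the special case of a finite group acting by cyclic shifts. This is an illustration of the underlying idea only, not the paper's categorical construction: the group $C_4$, the toy weight matrix `W`, and the functions `rho`, `symmetrise`, and `symmetrise_stochastic` are all assumptions made for this example.

```python
import math
import random

n = 4  # toy setting: the cyclic group C_4 acting on R^4 by coordinate shifts

def rho(k, x):
    """Group action: cyclically shift the coordinates of x by k places."""
    return tuple(x[(i - k) % n] for i in range(n))

# A stand-in "neural network": deterministic and NOT equivariant,
# since its weight matrix W is not circulant.
W = [[1.0, 2.0, 0.0, 0.0],
     [0.0, 1.0, -1.0, 0.0],
     [0.0, 0.0, 2.0, 1.0],
     [1.0, 0.0, 0.0, 1.0]]

def f(x):
    return tuple(math.tanh(sum(W[i][j] * x[j] for j in range(n)))
                 for i in range(n))

def symmetrise(f):
    """Classical averaging: f_sym(x) = (1/|G|) * sum_g rho(g^-1) f(rho(g) x)."""
    def f_sym(x):
        outs = [rho(-k, f(rho(k, x))) for k in range(n)]
        return tuple(sum(o[i] for o in outs) / n for i in range(n))
    return f_sym

def symmetrise_stochastic(f, rng=random):
    """Stochastic variant: draw g uniformly, return rho(g^-1) f(rho(g) x).
    The resulting Markov kernel is equivariant in distribution."""
    def f_sym(x):
        k = rng.randrange(n)
        return rho(-k, f(rho(k, x)))
    return f_sym

# Equivariance check: acting on the input and then applying the
# symmetrised map equals applying the map and then acting on the output.
x = (0.3, -1.2, 0.7, 2.0)
g = symmetrise(f)
lhs = g(rho(1, x))
rhs = rho(1, g(x))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

The stochastic variant corresponds to sampling a single group element rather than averaging, which is why the paper's Markov-categorical treatment, where deterministic and stochastic maps live in one framework, is a natural setting for both.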
Problem

Research questions and friction points this paper is trying to address.

Markov Processes
Symmetry
Stochastic Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markov Categories
Neural Networks
Stochastic Outputs