🤖 AI Summary
Problem: The lack of a unified theoretical foundation for converting between Bayesian networks and Markov networks hinders rigorous analysis and compositional reasoning.
Method: We propose an algebraic modeling framework based on category theory: Bayesian and Markov networks are formalized as functors mapping syntax (graphical structures) to semantics (probabilistic interpretations), while moralization and triangulation are rigorously defined as pre-composition operations with functors between corresponding categories.
Contribution/Results: This work provides the first unified categorical characterization of both network types, exposing their shared structural essence. It enables inductive definitions and modular composition of transformations, thereby enhancing abstraction, formal verifiability, and scalability of conversion procedures. By grounding probabilistic graphical models in category-theoretic semantics, our framework establishes a novel algebraic foundation for their theoretical unification and formal probabilistic reasoning.
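As a concrete (non-categorical) reference point for what the framework abstracts, here is a minimal Python sketch of classical moralisation: the directed model is represented as a hypothetical `{node: [parents]}` dictionary, and the moral graph is obtained by undirecting each parent-child edge and "marrying" co-parents. This illustrates the operation only; the paper's contribution is its functorial treatment, not this algorithm.

```python
def moralise(parents):
    """Moralise a DAG given as {node: [parents]}.

    Returns the undirected moral graph as a set of frozenset edges:
    each child is linked to its parents, and all parents of a common
    child are linked to each other ("married").
    """
    edges = set()
    for child, ps in parents.items():
        # Undirect each parent -> child edge.
        for p in ps:
            edges.add(frozenset((child, p)))
        # Marry every pair of co-parents.
        for i, u in enumerate(ps):
            for v in ps[i + 1:]:
                edges.add(frozenset((u, v)))
    return edges

# Classic collider A -> C <- B: moralisation adds the edge {A, B}.
dag = {"A": [], "B": [], "C": ["A", "B"]}
moral = moralise(dag)
```

On the collider example the moral graph has three edges, including the marrying edge between the two co-parents A and B.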
📝 Abstract
Moralisation and triangulation are transformations that allow switching between different ways of factoring a probability distribution into a graphical model. Moralisation lets one view a Bayesian network (a directed model) as a Markov network (an undirected model), whereas triangulation works in the opposite direction. We present a categorical framework where these transformations are modelled as functors between a category of Bayesian networks and one of Markov networks. The two kinds of network (the objects of these categories) are themselves represented as functors, from a `syntax' domain to a `semantics' codomain. Notably, moralisation and triangulation are definable inductively on such syntax, and operate as a form of functor pre-composition. This approach introduces a modular, algebraic perspective in the theory of probabilistic graphical models.
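For the opposite direction, a common concrete recipe for triangulation is elimination-order fill-in: vertices are eliminated one by one, and the remaining neighbours of each eliminated vertex are connected into a clique. The sketch below (with a hypothetical edge-set representation, not from the paper) shows this on a 4-cycle, which is the smallest non-chordal graph.

```python
def triangulate(edges, order):
    """Triangulate an undirected graph (iterable of 2-element edges)
    by eliminating vertices in the given order, adding fill-in edges
    so each vertex's later neighbours form a clique."""
    g = {frozenset(e) for e in edges}
    for k, v in enumerate(order):
        # Neighbours of v that come later in the elimination order.
        nbrs = [w for w in order[k + 1:] if frozenset((v, w)) in g]
        # Connect them pairwise: these are the fill-in edges.
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                g.add(frozenset((u, w)))
    return g

# 4-cycle A-B-C-D-A: eliminating A first fills in the chord {B, D}.
cycle = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
chordal = triangulate(cycle, ["A", "B", "C", "D"])
```

Different elimination orders can produce different (and differently sized) triangulations; finding a minimum one is NP-hard, which is part of why a compositional treatment of these transformations is of interest.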