ProbLog4Fairness: A Neurosymbolic Approach to Modeling and Mitigating Bias

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing fairness definitions are often mutually incompatible, and a principled, flexible, and interpretable framework for integrating domain knowledge, such as assumptions about systemic biases, into model training has been lacking. Method: The authors formalize bias assumptions as interpretable, composable templates in ProbLog, a probabilistic logic programming language, and use its neurosymbolic extensions to couple these bias models directly with neural network training. Contribution/Results: Rather than committing to a fixed bias type or a single fairness criterion, the approach lets practitioners model the bias assumptions relevant to a given task and mitigate them during training, on both tabular and image data. Experiments on synthetic datasets with known biases and on real-world tabular and image data show that ProbLog4Fairness outperforms baselines that assume a fixed bias type or fairness notion.

📝 Abstract
Operationalizing definitions of fairness is difficult in practice, as multiple definitions can be incompatible while each being arguably desirable. Instead, it may be easier to directly describe algorithmic bias through ad-hoc assumptions specific to a particular real-world task, e.g., based on background information on systemic biases in its context. Such assumptions can, in turn, be used to mitigate this bias during training. Yet, a framework for incorporating such assumptions that is simultaneously principled, flexible, and interpretable is currently lacking. Our approach is to formalize bias assumptions as programs in ProbLog, a probabilistic logic programming language that allows for the description of probabilistic causal relationships through logic. Neurosymbolic extensions of ProbLog then allow for easy integration of these assumptions in a neural network's training process. We propose a set of templates to express different types of bias and show the versatility of our approach on synthetic tabular datasets with known biases. Using estimates of the bias distortions present, we also succeed in mitigating algorithmic bias in real-world tabular and image data. We conclude that ProbLog4Fairness outperforms baselines due to its ability to flexibly model the relevant bias assumptions, where other methods typically uphold a fixed bias type or notion of fairness.
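As a concrete illustration of the kind of bias assumption the abstract describes, consider a label-flip bias: the observed training label is flipped with some probability for members of a protected group. The sketch below emulates the ProbLog semantics of such a template in plain Python by enumerating possible worlds; the probabilities and the specific template are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Hypothetical label-bias template, roughly corresponding to a ProbLog program:
#   0.7::true_label.
#   0.2::flip :- protected.
#   observed_label :- true_label, \+flip.
#   observed_label :- \+true_label, flip.
# The numbers below are illustrative, not from the paper.
p_true = 0.7   # P(true_label = 1)
p_flip = 0.2   # P(observed label is flipped | protected group)

def p_observed_positive(protected: bool) -> float:
    """Probability that the observed label is positive, computed by
    marginalizing over the two independent probabilistic facts."""
    total = 0.0
    for true_label, flip in product([True, False], repeat=2):
        # Weight of this world: product of the chosen facts' probabilities.
        w = p_true if true_label else 1 - p_true
        if protected:
            w *= p_flip if flip else 1 - p_flip
        elif flip:
            continue  # flips cannot occur outside the protected group
        # The observed label is the true label, possibly flipped.
        observed = true_label != (flip and protected)
        if observed:
            total += w
    return total

print(p_observed_positive(True))   # distorted positive rate for the group
print(p_observed_positive(False))  # undistorted positive rate
```

Enumerating worlds like this is exactly what makes the bias assumption auditable: each probabilistic fact is an explicit, interpretable modeling choice.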
Problem

Research questions and friction points this paper is trying to address.

Modeling algorithmic bias through probabilistic logic programming
Mitigating bias in neural networks using neurosymbolic integration
Providing a flexible framework for diverse fairness definitions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ProbLog to formalize bias assumptions
Integrates bias models into neural network training
Provides templates for diverse bias types
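The integration with neural network training can be sketched as follows: the network predicts the probability of the *true* label, the symbolic bias model maps that prediction to the probability of the *observed* (possibly distorted) label, and the loss is taken against observed labels through this mapping, so the network learns the debiased distribution. This is a minimal stand-in for the neurosymbolic ProbLog coupling, using the same hypothetical label-flip bias; the function names and flip rate are assumptions for illustration.

```python
import math

# Assumed flip rate for the protected group (illustrative, not from the paper).
p_flip = 0.2

def observed_prob(p_hat: float, protected: bool) -> float:
    """Map the network's prediction p_hat = P(true_label = 1 | x) to
    P(observed_label = 1) under the label-flip bias model."""
    if not protected:
        return p_hat
    return p_hat * (1 - p_flip) + (1 - p_hat) * p_flip

def nll(p_hat: float, y_obs: int, protected: bool) -> float:
    """Negative log-likelihood of the observed label through the bias model.
    In a real setup this would be differentiable w.r.t. the network's
    parameters, so gradients flow through the bias model during training."""
    q = observed_prob(p_hat, protected)
    return -math.log(q if y_obs == 1 else 1 - q)

print(observed_prob(0.7, True))   # distorted probability for the group
print(nll(0.7, 1, False))          # standard cross-entropy when no bias applies
```

Because the distortion is factored out into `observed_prob`, swapping in a different bias template (e.g., feature distortion instead of label flipping) changes only the symbolic layer, not the network or the training loop.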