🤖 AI Summary
This study identifies a long-overlooked epistemic dimension in algorithmic fairness: systematic credibility deficits or excesses that induce epistemic exclusion, whereby individuals are wrongly cut off from knowledge dissemination and innovation diffusion because credibility is misattributed to them. Existing research predominantly emphasizes ethical considerations while neglecting this underlying epistemic mechanism.
Method: We integrate epistemic justice into the algorithmic fairness framework for the first time, proposing a novel analytical pathway: “credibility bias → epistemic exclusion → innovation distortion.” We develop an epistemically extended Linear Threshold Model (LTM) that jointly incorporates dynamic individual credibility attribution and social tie strength, and that supports both open-loop and closed-loop intervention analysis (a minimal sketch follows).
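To ground the model description, here is a minimal, hedged sketch of what an epistemically extended LTM can look like in Python. The function name, data structures, and the multiplicative credibility scaling are our illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an epistemically extended Linear Threshold Model.
# The influence node j exerts on node i is its tie strength scaled by the
# credibility attributed to j: a deficit (credibility < 1) shrinks j's
# effective influence, an excess inflates it. All names and the
# multiplicative scaling are illustrative assumptions.

def epistemic_ltm(neighbors, tie_strength, credibility, thresholds, seeds, steps=50):
    """Run the diffusion and return the set of adopters.

    neighbors[i]      -- nodes with an edge pointing into i
    tie_strength[j,i] -- weight of the tie from j to i
    credibility[j]    -- credibility attributed to j (1 = unbiased)
    thresholds[i]     -- adoption threshold of i in [0, 1]
    seeds             -- iterable of initially active nodes
    """
    active = set(seeds)
    for _ in range(steps):
        newly_active = set()
        for i in set(neighbors) - active:
            # Social pressure on i: credibility-weighted ties from adopters.
            pressure = sum(credibility[j] * tie_strength[j, i]
                           for j in neighbors[i] if j in active)
            if pressure >= thresholds[i]:
                newly_active.add(i)
        if not newly_active:  # fixed point reached
            break
        active |= newly_active
    return active
```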
Contribution/Results: Simulations demonstrate that credibility bias significantly distorts both the pathways and the coverage of innovation adoption, and that fostering policies designed without accounting for this bias lead to consequential fairness misjudgments (illustrated on a toy network below). The work thereby introduces a formalizable, quantifiable epistemic dimension for fair algorithm design.
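The misjudgment is visible even on a toy network. The snippet below reuses `epistemic_ltm` from the sketch above; the four-node line network, thresholds, and the credibility deficit on node 1 are invented purely for illustration.

```python
# Toy 4-node line network 0 -> 1 -> 2 -> 3 with uniform ties and thresholds.
# All numbers are invented for illustration.
neighbors = {0: [], 1: [0], 2: [1], 3: [2]}
ties = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
theta = {i: 0.5 for i in neighbors}
true_cred = {i: 1.0 for i in neighbors}
biased_cred = {**true_cred, 1: 0.3}  # node 1 suffers a credibility deficit

def coverage(seeds, cred):
    return len(epistemic_ltm(neighbors, ties, cred, theta, seeds))

# Without bias, seeding node 0 reaches everyone; with the deficit, the
# diffusion stalls at node 1 (0.3 * 1.0 < theta[2]) and never reaches 2, 3.
print(coverage([0], true_cred))    # 4: full adoption
print(coverage([0], biased_cred))  # 2: blocked past the discredited node
```

A planner who ignores the deficit would predict full coverage from this seed, which is exactly the kind of fairness misjudgment the simulations expose.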
📝 Abstract
Algorithmic fairness is an expanding field that addresses a range of discrimination issues associated with algorithmic processes. Most works in the literature, however, analyze it only from an ethical perspective, focusing on the moral principles and values that should inform the design and evaluation of algorithms, while disregarding the epistemic dimension related to knowledge transmission and validation. This aspect of algorithmic fairness should also enter the debate, as it captures a specific type of harm: an individual may be systematically excluded from the dissemination of knowledge due to the attribution of a credibility deficit or excess. In this work, we characterize and analyze the impact of such credibility deficits and excesses on the diffusion of innovations at a societal scale, a phenomenon driven by individual attitudes, social interactions, and the strength of mutual connections. Indeed, discrimination may shape the latter, ultimately modifying how innovations spread within the network. In this light, formally incorporating the epistemic dimension into innovation diffusion models becomes paramount, especially if these models are intended to support fair policy design. To this end, we formalize the epistemic properties of a social environment by extending the well-established Linear Threshold Model (LTM) in an epistemic direction, showing the impact of epistemic biases on innovation diffusion. Examining epistemic bias in both open-loop and closed-loop scenarios featuring optimal fostering policies, our results shed light on the pivotal role the epistemic dimension can play in the debate on algorithmic fairness in decision-making.
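To make the abstract's open-loop versus closed-loop distinction concrete, the hedged sketch below builds on `epistemic_ltm` and the toy network above. Greedy seed selection with a credibility-blind model is our stand-in for the paper's optimal fostering policies, not their actual method; the contrast shows how a closed-loop planner can observe a stalled diffusion and re-seed past the blockage, while an open-loop planner cannot.

```python
# Open loop: commit all seeds upfront, scored by the planner's model.
def greedy_open_loop(budget, cred_model):
    seeds = set()
    for _ in range(budget):
        best = max(sorted(set(neighbors) - seeds), key=lambda n: len(
            epistemic_ltm(neighbors, ties, cred_model, theta, seeds | {n})))
        seeds.add(best)
    return seeds

# Closed loop: spend one seed per round, observing the true diffusion
# (which unfolds under the actual, biased credibility) in between.
def greedy_closed_loop(budget, cred_model, cred_world, rounds=6):
    active = set()
    for _ in range(rounds):
        remaining = set(neighbors) - active
        if budget > 0 and remaining:
            best = max(sorted(remaining), key=lambda n: len(
                epistemic_ltm(neighbors, ties, cred_model, theta, active | {n})))
            active.add(best)
            budget -= 1
        active = epistemic_ltm(neighbors, ties, cred_world, theta, active, steps=1)
    return active

blind = true_cred  # a planner who ignores the credibility deficit on node 1
open_seeds = greedy_open_loop(2, blind)
print(len(epistemic_ltm(neighbors, ties, biased_cred, theta, open_seeds)))  # 2
print(len(greedy_closed_loop(2, blind, biased_cred)))                       # 4
```

On the toy network, the open-loop policy covers only 2 nodes while the closed-loop one recovers full coverage by re-seeding beyond the discredited node, which is one reason the paper analyzes both regimes.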