AI Summary
In asymmetric human–AI hybrid societies, the mechanisms that effectively promote fairness remain unclear. This study addresses the gap by constructing a bipartite evolutionary model of the Ultimatum Game that explicitly separates humans and AI into distinct proposer and responder roles. Two novel AI strategies are introduced: Samaritan AI, which behaves either as an unconditionally fair proposer or as a strict responder, and Discriminatory AI, a proposer that predicts co-players' acceptance thresholds and offers fair portions only to demanding responders. Through strategic modeling, selection-strength analysis, and steady-state simulations, the research demonstrates that AI serving as a strict responder fosters group-level fairness far more effectively than AI serving as a generous proposer. Notably, under strong selection, Discriminatory AI achieves and sustains cross-group fairness at a substantially lower critical prevalence than the unconditional strategies, highlighting its efficacy in stabilizing equitable outcomes.
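The interaction underlying the model can be sketched as a single Ultimatum Game round. This is an illustrative sketch, not the paper's implementation; it assumes the standard threshold formulation, in which a proposer offers a share of a unit stake and a responder accepts if and only if the offer meets their acceptance threshold.

```python
def ultimatum_round(offer: float, threshold: float, stake: float = 1.0):
    """One round of the Ultimatum Game (illustrative).

    The proposer offers `offer` out of `stake`; the responder accepts
    iff the offer meets their acceptance threshold. Rejection leaves
    both players with nothing.

    Returns (proposer payoff, responder payoff).
    """
    if offer >= threshold:
        return stake - offer, offer
    return 0.0, 0.0

# A generous (Samaritan-style) proposer's fair split is accepted:
print(ultimatum_round(0.5, 0.3))   # (0.5, 0.5)
# A low offer is rejected by a strict responder, costing both players:
print(ultimatum_round(0.1, 0.5))   # (0.0, 0.0)
```

The second call shows why a strict responder can enforce fairness: rejecting low offers makes unfair proposals unprofitable.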
Abstract
Fairness in hybrid societies hinges on a simple choice: should AI be a generous host or a strict gatekeeper? Moving beyond symmetric models, we show that in asymmetric social structures, such as those arising in hiring, regulation, and negotiation, AI that guards fairness outperforms AI that gifts it. We develop a bipartite hybrid population model of the Ultimatum Game that separates humans and AI into distinct proposer and receiver groups. We first introduce Samaritan AI agents, which act as either unconditionally fair proposers or strict receivers. Our results reveal a striking asymmetry: Samaritan AI receivers drive population-wide fairness far more effectively than Samaritan AI proposers. To overcome the limitations of the Samaritan AI proposer, we design the Discriminatory AI proposer, which predicts co-players' expectations and offers fair portions only to those with high acceptance thresholds. Our results demonstrate that this Discriminatory AI outperforms both types of Samaritan AI, especially under strong selection. It not only sustains fairness across both populations but also significantly lowers the critical mass of agents required to reach an equitable steady state. By moving from unconditional generosity to strategic enforcement, our work provides a pivotal framework for deploying asymmetric AI agents in an increasingly hybrid society.
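The Discriminatory AI proposer rule described above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name, the fixed fair and minimal offers, and the threshold cutoff are hypothetical, not the paper's parameters): the AI predicts a co-player's acceptance threshold and reserves the fair split for demanding responders.

```python
def discriminatory_offer(predicted_threshold: float,
                         fair_offer: float = 0.5,
                         minimal_offer: float = 0.01,
                         cutoff: float = 0.5) -> float:
    """Illustrative Discriminatory-AI proposer rule (hypothetical names).

    Offer the fair split only to co-players predicted to have a high
    acceptance threshold; keep the offer minimal otherwise. Generosity
    thus rewards demanding responders and is withheld from acquiescent
    ones, which pushes the responder population toward stricter,
    fairness-enforcing thresholds.
    """
    return fair_offer if predicted_threshold >= cutoff else minimal_offer

# A strict responder is predicted to demand half, and receives it:
print(discriminatory_offer(0.7))   # 0.5
# A lenient responder who would accept almost anything gets almost nothing:
print(discriminatory_offer(0.1))   # 0.01
```

The design choice is the conditionality itself: unlike the Samaritan proposer, which offers fairly to everyone, this rule makes a high acceptance threshold the profitable human strategy.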