🤖 AI Summary
Contemporary AI systems prioritize ideological neutrality, yet this approach may inadvertently engender automation bias and diminish human cognitive engagement. Method: We conducted a randomized controlled trial with 2,500 participants, deploying multiple versions of a GPT-4o assistant—programmed with left-leaning, right-leaning, or bidirectional cultural biases—to assess their impact on human decision quality, cognitive effort, and trust during information evaluation tasks. Results: Unidirectional culturally biased AI significantly improved human judgment accuracy and cognitive engagement when its bias opposed the participant's ideology, but it also eroded user trust. In contrast, bidirectionally biased AI sustained these performance gains while substantially narrowing the trust gap. This study provides the first empirical evidence of a nonlinear triadic relationship among bias, performance, and trust—challenging the prevailing neutrality paradigm in AI design and offering a novel pathway toward collaborative intelligent systems that augment, rather than supplant, human agency.
📝 Abstract
Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to unbiased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underrated biased AI and overcredited neutral systems. Exposing participants to a pair of AIs whose biases flanked the human's own perspective closed this perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and more resilient human decision-making.