Biased AI improves human decision-making but reduces trust

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Contemporary AI systems prioritize ideological neutrality, yet this approach may inadvertently engender automation bias and diminish human cognitive engagement. Method: We conducted a randomized controlled trial with 2,500 participants, deploying multiple versions of a GPT-4o assistant—programmed with left-leaning, right-leaning, or bidirectional cultural biases—to assess their impact on human decision quality, cognitive effort, and trust during information evaluation tasks. Results: Unidirectionally biased AI significantly improved human judgment accuracy and cognitive engagement, especially under ideologically opposing conditions, but eroded user trust. In contrast, bidirectionally biased AI sustained the performance gains while substantially narrowing the trust gap. This study provides the first empirical evidence of a nonlinear triadic relationship among bias, performance, and trust—challenging the prevailing neutrality paradigm in AI design and offering a novel pathway toward collaborative intelligent systems that augment, rather than supplant, human agency.

📝 Abstract
Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to non-biased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underappreciated biased AI and overcredited neutral systems. Exposing participants to two AIs whose biases flanked human perspectives closed the perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and resilient human decision-making.
Problem

Research questions and friction points this paper is trying to address.

AI neutrality may reduce human cognitive engagement
Culturally biased AI improves decision-making but lowers trust
Strategic bias integration enhances human performance and resilience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Culturally biased AI enhances decision-making performance
Partisan AI assistants increase engagement and reduce bias
Strategic bias integration closes perception-performance gap