🤖 AI Summary
This study investigates whether media can function as a “soft regulator” that incentivizes AI developers to balance safety and profitability in the absence of formal government oversight. Method: We develop an evolutionary game-theoretic model of self-interested developers and users, simulating how media exposure dynamically influences the emergence and sustainability of safety-oriented cooperative behavior. Contribution/Results: We demonstrate that media exerts regulatory influence by shaping public perception and reinforcing developer accountability. Its efficacy, however, depends critically on two factors: the credibility of the information and the cost of accessing it. High-credibility, low-access-cost information significantly increases the probability that safety cooperation evolves; conversely, low-credibility or high-cost information suppresses it. Simulation results confirm that media possesses genuine regulatory potential, but only when coupled with systemic improvements in information quality and accessibility. Strategic media engagement, complemented by enhanced transparency and dissemination infrastructure, is therefore essential to bridging critical gaps in AI safety governance.
📝 Abstract
When developers of artificial intelligence (AI) products must choose between profit and user safety, they are likely to choose profit. For safety to prevail, untrustworthy AI technology must therefore carry tangible negative consequences. Here, we envisage those consequences as the loss of reputation caused by media coverage of creators' misdeeds, disseminated to the public. We explore whether media coverage has the potential to push AI creators toward the production of safe products, enabling widespread adoption of AI technology. We created artificial populations of self-interested creators and users and studied them through the lens of evolutionary game theory. Our results reveal that media is indeed able to foster cooperation between creators and users, but not always. Cooperation does not evolve if the information provided by the media is not reliable enough, or if the costs of either accessing media or ensuring safety are too high. By shaping public perception and holding developers accountable, media emerges as a powerful soft regulator, guiding AI safety even in the absence of formal government oversight.
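The paper's model itself is not reproduced here, but the mechanism the abstract describes can be sketched with two-population replicator dynamics: creators evolve between safe and unsafe production, while users evolve between consulting media reports and staying uninformed. All payoff values and parameter names below (`p`, `s`, `h`, `q`, `c`) are illustrative assumptions for the sketch, not the study's actual specification.

```python
def simulate(q, c, p=2.0, s=1.0, h=3.0, steps=20_000, dt=0.01):
    """Two-population replicator dynamics (illustrative sketch).

    x: fraction of creators producing safe AI
    y: fraction of users who pay cost c to consult media reports
    q: media credibility (prob. an unsafe product is correctly exposed)
    """
    x, y = 0.5, 0.5
    for _ in range(steps):
        # Creators: safe ones pay the safety cost s but keep every customer;
        # unsafe ones lose the informed users who see a credible exposure.
        pi_safe = p - s
        pi_unsafe = p * (1 - q * y)
        # Users: consulting media costs c but lets them dodge, with prob. q,
        # the harm h of adopting an unsafe product. (The benefit of adopting
        # a safe product is common to both user strategies and cancels.)
        u_informed = -h * (1 - x) * (1 - q) - c
        u_ignorant = -h * (1 - x)
        # Replicator update: strategies grow when they beat the alternative.
        x += dt * x * (1 - x) * (pi_safe - pi_unsafe)
        y += dt * y * (1 - y) * (u_informed - u_ignorant)
    return x, y

# High media credibility, negligible access cost: safe creation spreads.
x_hi, _ = simulate(q=0.8, c=0.0)
# Low credibility, same access cost: safe creation dies out.
x_lo, _ = simulate(q=0.2, c=0.0)
print(f"safe creators -- credible media: {x_hi:.2f}, noisy media: {x_lo:.2f}")
```

Under these assumed payoffs the sketch reproduces the abstract's qualitative claim: cooperation fixates only when exposure is credible, and raising the access cost `c` (or the safety cost `s`) shrinks the region where safety can evolve.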