🤖 AI Summary
This study investigates how the level of detail in labels for AI-generated images (basic, moderate, maximum) and content stakes (high vs. low) influence user engagement, perceived transparency, and trust in social media contexts. Method: a within-subjects experiment manipulated label detail and content stakes as independent variables. Results: (1) increasing label detail significantly enhances perceived transparency without reducing user engagement; (2) content stakes significantly affect both engagement and trust, with users showing higher engagement and trust for low-stakes images; (3) the two factors can be combined in practice: applying moderate-to-maximum labeling to low-stakes content balances transparency and user experience. This work provides systematic empirical evidence on how label design and content attributes jointly shape user responses, offering actionable guidance for developing trustworthy, usable labeling frameworks for AI-generated content.
📝 Abstract
AI-generated images are increasingly prevalent on social media, raising concerns about trust and authenticity. This study investigates how different levels of label detail (basic, moderate, maximum) and content stakes (high vs. low) influence user engagement with, and perceptions of, AI-generated images in a within-subjects experiment with 105 participants. Our findings reveal that increasing label detail enhances users' perceptions of label transparency without affecting engagement. Content stakes, however, significantly shape both engagement and perceptions: users show higher engagement and trust for low-stakes images. These results suggest that social media platforms can adopt detailed labels to improve transparency without compromising user engagement, offering insights for effective labeling strategies for AI-generated content.