🤖 AI Summary
How can AI risks and benefits be communicated clearly to non-technical audiences to support effective regulation and democratic participation? This study introduces the Impact Assessment Card, a visual, standardized communication tool grounded in human-computer interaction (HCI) research and user-centered design. The card was developed through three focus groups (12 participants in total) that identified design requirements and shaped early versions, and a refined version was then evaluated in an online study with 235 participants, including AI developers, compliance experts, and members of the public. Compared with a conventional full impact assessment report, the card led to faster task completion and higher-quality outputs across participants of diverse backgrounds. Its core contribution lies in translating complex AI governance information into a lightweight, comparable, and accessible format, advancing AI transparency and more inclusive governance.
📝 Abstract
Communicating the risks and benefits of AI is important for regulation and public understanding. Yet current methods such as technical reports often exclude people without technical expertise. Drawing on HCI research, we developed an Impact Assessment Card to present this information more clearly. We held three focus groups with a total of 12 participants who helped identify design requirements and create early versions of the card. We then tested a refined version in an online study with 235 participants, including AI developers, compliance experts, and members of the public selected to reflect the U.S. population by age, sex, and race. Participants used either the card or a full impact assessment report to write an email supporting or opposing a proposed AI system. The card led to faster task completion and higher-quality emails across all groups. We discuss how design choices can improve accessibility and support AI governance. Examples of cards are available at: https://social-dynamics.net/ai-risks/impact-card/.