🤖 AI Summary
This study addresses judgment conflicts and transparency deficits in AI risk assessment arising from divergent stakeholder perceptions. We propose a stakeholder-perception-driven, explainable risk assessment framework. Methodologically, we employ a large language model (LLM) as a risk discriminator, integrating the Risk Atlas Nexus for structured risk modeling and GloVE-based attribution explanations to generate fine-grained, perspective-sensitive risk policies; interactive visualizations further expose the roots of stakeholder disagreement. Our key contribution is the first systematic integration of multi-stakeholder perspectives into an LLM-based risk discrimination pipeline, enabling human-aligned, interpretable policy generation. Evaluation across medical AI, autonomous driving, and fraud detection demonstrates that the framework substantially improves the explainability of risk attributions and the transparency of governance decisions, while effectively identifying and diagnosing cross-group discrepancies in risk perception and their structural origins.
📝 Abstract
Understanding how different stakeholders perceive risks in AI systems is essential for their responsible deployment. This paper presents a framework for stakeholder-grounded risk assessment that uses LLMs as judges to predict and explain risks. Using the Risk Atlas Nexus and the GloVE explanation method, our framework generates stakeholder-specific, interpretable policies that show where different stakeholders agree or disagree about the same risks. We demonstrate our method on three real-world AI use cases: medical AI, autonomous vehicles, and fraud detection. We further propose an interactive visualization that reveals how and why conflicts emerge across stakeholder perspectives, enhancing transparency in conflict reasoning. Our results show that stakeholder perspectives significantly influence risk perception and conflict patterns. Our work underscores the importance of stakeholder-aware explanations in making LLM-based evaluations more transparent, interpretable, and aligned with human-centered AI governance goals.
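As a rough illustration of the stakeholder-conditioned LLM-as-judge step described in the abstract, the sketch below shows one way such a pipeline could be wired up: each stakeholder role is injected into the judging prompt, and pairwise disagreements over risk verdicts are collected for later explanation. Everything here is a hypothetical stand-in, including the `call_llm` helper, the prompt wording, the stakeholder roles, and the toy risk taxonomy; it does not reproduce the paper's actual Risk Atlas Nexus integration or GloVE-based policy extraction.

```python
# Minimal sketch of stakeholder-conditioned LLM-as-judge risk assessment.
# All names below (call_llm, STAKEHOLDERS, RISKS) are illustrative assumptions,
# not the paper's implementation or the Risk Atlas Nexus API.
from itertools import combinations

STAKEHOLDERS = ["patient", "clinician", "hospital administrator", "regulator"]
RISKS = ["privacy", "safety", "fairness", "accountability"]  # toy taxonomy

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's chat client."""
    raise NotImplementedError

def judge_risk(use_case: str, stakeholder: str, risk: str) -> bool:
    """Ask the LLM, from one stakeholder's perspective, whether a risk applies."""
    prompt = (
        f"You are assessing the AI use case: {use_case}.\n"
        f"Adopt the perspective of a {stakeholder}.\n"
        f"Does the risk '{risk}' apply? Answer YES or NO, then give a one-line reason."
    )
    return call_llm(prompt).strip().upper().startswith("YES")

def disagreement_report(use_case: str) -> dict:
    """Collect per-risk verdicts for each stakeholder and list pairwise conflicts."""
    verdicts = {
        s: {r: judge_risk(use_case, s, r) for r in RISKS} for s in STAKEHOLDERS
    }
    conflicts = {
        (a, b): [r for r in RISKS if verdicts[a][r] != verdicts[b][r]]
        for a, b in combinations(STAKEHOLDERS, 2)
    }
    return {"verdicts": verdicts, "conflicts": conflicts}
```

In this sketch, the `conflicts` map is the raw material a visualization could consume to show which stakeholder pairs disagree on which risks; the paper's framework additionally extracts interpretable, stakeholder-specific policies to explain why those conflicts arise.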