🤖 AI Summary
This study addresses a critical problem in educational technology (EdTech): mainstream platforms visualize algorithmic predictions as certain, systematically obscuring predictive uncertainty and thereby undermining transparency and equity.
Method: Through cross-domain critical inquiry, combining visualization discourse analysis, critical speculative design, and comparative case studies drawn from defense, climate science, and healthcare, the study identifies structural gaps in how educational prediction systems represent uncertainty.
Contribution/Results: It introduces education-equity-oriented design principles and a critical practice framework for uncertainty visualization, distilling transferable paradigms for representing uncertainty. Grounded in empirical analysis, the work provides actionable, evidence-based guidance for making educational AI transparent and accountable. Crucially, it advances algorithmic accountability beyond technical representation toward socially situated understanding, shifting the focus of accountability from computational accuracy to pedagogical and ethical context.
📝 Abstract
AI-powered predictive systems have high margins of error. However, data visualisations of algorithmic systems in education and other social fields tend to visualise certainty, thus invisibilising the underlying approximations and uncertainties of the algorithmic systems and the social settings in which these systems operate. This paper draws on a critical speculative approach to first analyse data visualisations from predictive analytics platforms for education. It demonstrates that visualisations of uncertainty in education are rare. Second, the paper explores uncertainty visualisations in other fields (defence, climate change and healthcare). The paper concludes by reflecting on the role of data visualisations and un/certainty in shaping educational futures. It also identifies practical implications for the design of data visualisations in education.
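The contrast the abstract draws can be made concrete. The sketch below is not from the paper: it renders the same (fabricated) student predictions twice, once as bare point estimates of the kind the authors critique, and once with explicit prediction intervals. The student labels, scores, 90% interval widths, and matplotlib styling are all illustrative assumptions.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical predicted exam scores for five students, with
# illustrative 90% prediction intervals (fabricated for this sketch).
students = ["A", "B", "C", "D", "E"]
predicted = np.array([62, 71, 55, 80, 67])
halfwidth = np.array([12, 8, 15, 6, 10])  # +/- interval around each prediction

fig, (ax_point, ax_uncertain) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

# Left: the "certainty" style the paper critiques -- bare point estimates.
ax_point.bar(students, predicted, color="steelblue")
ax_point.set_title("Point estimate only")
ax_point.set_ylabel("Predicted score")

# Right: the same predictions with their uncertainty made visible.
ax_uncertain.errorbar(students, predicted, yerr=halfwidth,
                      fmt="o", capsize=4, color="steelblue")
ax_uncertain.set_title("With 90% prediction interval")

fig.tight_layout()
plt.show()
```

Even this minimal change alters what the chart communicates: overlapping intervals (e.g., students A and C above) show that the system cannot confidently rank those students, which a bare point estimate conceals.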