🤖 AI Summary
Algorithmic systems that decide which information to surface to users are hard to understand and oversee: they are structurally complex, poorly documented, and shaped by dense feedback loops. To address this, we introduce a formal framework for visibility allocation systems (VASs), (semi-)automated systems that decide which (processed) data to present to a human user. The framework models recommendation, moderation, and prediction modules as analyzable dataflow graphs, enabling end-to-end decomposition and risk localization, and it combines formal modeling, cross-module compositional analysis, and a computable metrics suite for visibility governance. By unifying heterogeneous algorithmic components, the framework supports multi-level diagnostics and compliance assessment against regulatory standards. Evaluated on forecasting-based school-choice recommendations as a case study, it identifies systemic biases, attributes impact across pipeline stages, and provides actionable evidence for AI policy formulation, thereby offering a technically grounded, policy-ready foundation for algorithmic accountability and governance.
📝 Abstract
Across application domains, we now rely extensively on algorithmic systems to engage with ever-expanding datasets of information. Despite their benefits, these systems are often complex (comprising many intricate tools, e.g., moderation, recommender systems, and prediction models), of unknown structure (due to the lack of accompanying documentation), and prone to hard-to-predict yet potentially severe downstream consequences (due to their extensive use, systematic enactment of existing errors, and many constituent feedback loops). As such, understanding and evaluating these systems as a whole remains a challenge for both researchers and legislators. To aid ongoing efforts, we introduce a formal framework for such visibility allocation systems (VASs), which we define as (semi-)automated systems that decide which (processed) data to present to a human user. We review typical tools comprising VASs and define the associated computational problems they solve. This allows VASs to be decomposed into sub-processes and illustrated via data-flow diagrams. Moreover, we survey metrics for evaluating VASs throughout the pipeline, thus aiding system diagnostics. Using forecasting-based recommendations in school choice as a case study, we demonstrate how our framework can support VAS evaluation. We also discuss how our framework can support ongoing AI-legislative efforts to locate obligations, quantify systemic risks, and enable adaptive compliance.