Visibility Allocation Systems: How Algorithmic Design Shapes Online Visibility and Societal Outcomes

📅 2025-10-20
🤖 AI Summary
Algorithmic systems are difficult to understand and oversee: their structure is complex, documentation is often inadequate, and dense feedback loops obscure how their visibility-allocation decisions are made, impeding both transparent analysis and regulatory oversight. To address this, the paper proposes the Visibility Allocation System (VAS), a unified formal framework that models recommendation, moderation, and prediction modules as analyzable dataflow graphs, enabling end-to-end decomposition and risk localization. The method integrates formal modeling, cross-module compositional analysis, and a computable metrics suite for visibility governance. VAS is presented as the first framework to systematically unify these heterogeneous algorithmic components, supporting multi-level diagnostics and compliance assessment against regulatory standards. Evaluated on an educational recommendation system, VAS identifies systemic biases, attributes causal impact across pipeline stages, and provides actionable evidence for AI policy formulation, thereby establishing a technically grounded, policy-ready foundation for algorithmic accountability and governance.

📝 Abstract
Throughout application domains, we now rely extensively on algorithmic systems to engage with ever-expanding datasets of information. Despite their benefits, these systems are often complex (comprising many intricate tools, e.g., moderation, recommender systems, prediction models), of unknown structure (due to the lack of accompanying documentation), and have hard-to-predict yet potentially severe downstream consequences (due to their extensive use, systematic enactment of existing errors, and many constituent feedback loops). As such, understanding and evaluating these systems as a whole remains a challenge for both researchers and legislators. To aid ongoing efforts, we introduce a formal framework for such visibility allocation systems (VASs), which we define as (semi-)automated systems deciding which (processed) data to present a human user with. We review typical tools comprising VASs and define the associated computational problems they solve. By doing so, VASs can be decomposed into sub-processes and illustrated via data flow diagrams. Moreover, we survey metrics for evaluating VASs throughout the pipeline, thus aiding system diagnostics. Using forecasting-based recommendations in school choice as a case study, we demonstrate how our framework can support VAS evaluation. We also discuss how our framework can support ongoing AI-legislative efforts to locate obligations, quantify systemic risks, and enable adaptive compliance.
Problem

Research questions and friction points this paper is trying to address.

Analyzing algorithmic systems' complex structure and hidden impacts
Formalizing visibility allocation systems for systematic decomposition
Developing evaluation metrics for algorithmic societal risk assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces formal framework for visibility allocation systems
Decomposes systems into sub-processes with data flow
Surveys metrics for evaluating algorithmic systems pipeline
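The decomposition idea above can be sketched as a tiny dataflow pipeline in which each sub-process is an inspectable stage. This is a minimal illustration only: the stage names (`moderate`, `recommend`, `top_k`), the example items, and the per-stage survival-rate diagnostic are assumptions for the sketch, not the paper's exact formalization or metrics.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Item = Dict[str, object]          # e.g., {"id": "a", "score": 0.9, "flagged": False}
Stage = Callable[[List[Item]], List[Item]]

@dataclass
class VAS:
    """A visibility allocation system as an ordered dataflow of sub-processes."""
    stages: List[Tuple[str, Stage]]

    def run(self, items: List[Item]):
        trace = {"input": list(items)}
        for name, stage in self.stages:
            items = stage(items)
            trace[name] = list(items)  # snapshot after each sub-process
        return items, trace

# Illustrative sub-processes (hypothetical, not from the paper):
def moderate(items):   # moderation: drop flagged content
    return [it for it in items if not it.get("flagged", False)]

def recommend(items):  # recommendation: rank by a prediction score
    return sorted(items, key=lambda it: it["score"], reverse=True)

def top_k(k):          # selection: the user only sees the first k results
    return lambda items: items[:k]

items = [
    {"id": "a", "score": 0.9, "flagged": False},
    {"id": "b", "score": 0.7, "flagged": True},
    {"id": "c", "score": 0.4, "flagged": False},
    {"id": "d", "score": 0.8, "flagged": False},
]
vas = VAS(stages=[("moderation", moderate),
                  ("recommendation", recommend),
                  ("selection", top_k(2))])
visible, trace = vas.run(items)

# A simple pipeline-level diagnostic: what fraction of the input survives
# each stage, localizing where visibility is allocated away.
survival = {name: len(out) / len(trace["input"]) for name, out in trace.items()}
print([it["id"] for it in visible])                   # -> ['a', 'd']
print(survival["moderation"], survival["selection"])  # -> 0.75 0.5
```

Because `run` records a snapshot after every stage, the same trace supports stage-level metrics (bias, exposure concentration, error rates) without changing the pipeline itself.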
Stefania Ionescu
ETH Zurich
Robin Forsberg
University of Zürich
Elsa Lichtenegger
University of Zürich
Salima Jaoua
University of Zürich
Kshitijaa Jaglan
University of Zürich
Florian Dörfler
ETH Zurich
Aniko Hannak
Assistant Professor, University of Zürich
Computational Social Science · Algorithmic Fairness · Algorithm Auditing · Platform Economy