Beyond Content: How Author Network Centrality Drives Citation Disparities in Top AI Conferences

📅 2025-12-25
🤖 AI Summary
Citation disparities in top-tier AI conferences persist even after controlling for paper quality, suggesting unobserved structural drivers. Method: Leveraging 17,942 papers from NeurIPS, ICML, and ICLR (2005–2024), we propose Harmonic Closeness with Temporal and Collaboration Count Decay (HCTCD)—a temporal, collaboration-strength-weighted centrality measure—and employ beta regression to model citation percentile ranks. Contribution/Results: We demonstrate that team-level exponentially weighted centrality aggregation substantially outperforms individual- or rank-based aggregation, and that long-term centrality exerts significantly greater influence than short-term metrics. Integrating HCTCD reduces mean squared error in citation prediction by 2.4%–4.8%. This work provides systematic empirical evidence that network structural position—rather than content quality alone—shapes citation distribution in AI research, offering both interpretability and a quantifiable, fairness-aware tool for scholarly evaluation.
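The beta regression used above is not spelled out on this page; the standard mean–precision parameterization (the Ferrari–Cribari-Neto form commonly used for bounded outcomes in (0, 1)) reads:

```latex
y_i \sim \mathrm{Beta}\!\left(\mu_i \phi,\ (1 - \mu_i)\phi\right),
\qquad
\operatorname{logit}(\mu_i) = \mathbf{x}_i^\top \boldsymbol{\beta},
```

where \(y_i \in (0,1)\) is a paper's citation percentile, \(\mu_i\) its conditional mean, \(\phi\) a precision parameter, and \(\mathbf{x}_i\) the feature vector (content and centrality features). The paper's exact link function and covariate set may differ from this sketch.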

📝 Abstract
While scholarly citations are pivotal for assessing academic impact, they often reflect systemic biases beyond research quality. This study examines a critical yet underexplored driver of citation disparities: authors' structural positions within scientific collaboration networks. Through a large-scale analysis of 17,942 papers from three top-tier machine learning conferences (NeurIPS, ICML, ICLR) published between 2005 and 2024, we quantify the influence of author centrality on citations. Methodologically, we advance the field by employing beta regression to model citation percentiles, which appropriately accounts for the bounded nature of citation data. We also propose a novel centrality metric, Harmonic Closeness with Temporal and Collaboration Count Decay (HCTCD), which incorporates temporal decay and collaboration intensity. Our results robustly demonstrate that long-term centrality exerts a significantly stronger effect on citation percentiles than short-term metrics, with closeness centrality and HCTCD emerging as the most potent predictors. Importantly, team-level centrality aggregation, particularly through exponentially weighted summation, explains citation variance more effectively than conventional rank-based approaches, underscoring the primacy of collective network connectivity over individual prominence. Integrating centrality features into machine learning models yields a 2.4% to 4.8% reduction in prediction error (MSE), confirming their value beyond content-based benchmarks. These findings challenge entrenched evaluation paradigms and advocate for network-aware assessment frameworks to mitigate structural inequities in scientific recognition.
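The paper's exact HCTCD formula is not reproduced here. A minimal sketch of the general idea—harmonic closeness over a collaboration graph whose edge distances shrink with collaboration count and grow with time since the last collaboration—might look like the following; `edge_distance` and its decay rate are illustrative assumptions, not the paper's definition:

```python
import heapq
import math

def edge_distance(n_collabs, years_since, decay=0.5):
    # Hypothetical weighting: more collaborations shorten the distance,
    # while elapsed time since the last collaboration lengthens it.
    return math.exp(decay * years_since) / n_collabs

def shortest_paths(graph, src):
    # Dijkstra over a weighted collaboration graph {u: {v: dist, ...}}.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def harmonic_closeness(graph, v):
    # Harmonic closeness: sum of reciprocal shortest-path distances.
    dist = shortest_paths(graph, v)
    return sum(1.0 / d for u, d in dist.items() if u != v)

# Toy network: (n_collabs, years_since_last_collab) for each author pair.
edges = {("a", "b"): (5, 1), ("b", "c"): (1, 4), ("a", "c"): (2, 2)}
graph = {}
for (u, v), (n, yrs) in edges.items():
    w = edge_distance(n, yrs)
    graph.setdefault(u, {})[v] = w
    graph.setdefault(v, {})[u] = w

print(round(harmonic_closeness(graph, "a"), 3))  # → 3.768
```

In this toy graph, author "a" scores highest because frequent, recent collaborations translate into short network distances—the qualitative behavior the metric is designed to capture.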
Problem

Research questions and friction points this paper is trying to address.

Examining how author network centrality influences citation disparities in AI conferences
Quantifying the impact of long-term centrality on citation percentiles using novel metrics
Proposing network-aware assessment to reduce structural biases in scientific recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Beta regression models citation percentiles while respecting their bounded (0, 1) support
Novel HCTCD centrality metric includes temporal and collaboration decay
Team-level centrality aggregation via exponential weighting improves prediction
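The exponentially weighted team aggregation that the summary reports as most effective can be sketched as follows; sorting authors by centrality in descending order and the decay rate `alpha` are assumptions for illustration, not values from the paper:

```python
def team_centrality(author_centralities, alpha=0.5):
    # Exponentially weighted sum: the most central author contributes
    # fully, and each subsequent author is down-weighted by alpha.
    # alpha is a hypothetical decay rate, not taken from the paper.
    scores = sorted(author_centralities, reverse=True)
    return sum(c * alpha ** i for i, c in enumerate(scores))

# 0.9 + 0.5 * 0.5 + 0.2 * 0.25
print(round(team_centrality([0.2, 0.9, 0.5]), 3))  # → 1.2
```

Unlike rank-based schemes that keep only the best-connected author, this weighting lets every coauthor's position contribute with diminishing influence, which is one way to operationalize "collective network connectivity over individual prominence."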