🤖 AI Summary
This paper identifies three overlooked latent technical elements—heuristic models, critical assumptions, and parameter specifications—in interdisciplinary social computing research. Often lacking rigorous theoretical foundations in computing, these elements implicitly encode normative design intentions, leading to displaced accountability and failures of socio-technical scrutiny. Method: Drawing on conceptual analysis, critical technical practice, and socio-technical systems theory, the study systematically defines and deconstructs these elements, identifying six interrelated risk dimensions. Contribution/Results: The paper introduces the first methodology-oriented warning framework explicitly targeting modeling-process transparency and cross-disciplinary accountability. Designed to support research on algorithmic governance, AI ethics, and human-AI collaboration, the framework provides an actionable, deep socio-technical audit pathway that foregrounds epistemic responsibility in computational social science practice.
📝 Abstract
Insightful interdisciplinary collaboration is essential to the principled governance of technology. When such efforts address the interaction between computation and society, they often focus on modeling, the process by which computer scientists formally define problems in order to enable algorithmic solutions. But modeling is a multifaceted and inherently imperfect process. Especially in interdisciplinary work, it often receives uneven scrutiny because of the practical challenges of communicating complex technical details to non-experts. We argue that there is an underappreciated, if loose, family of obscure and opaque technical caveats, choices, and qualifiers on which the social effects of computing can depend just as much as on far more heavily scrutinized modeling choices. Researchers often use these artifacts to paper over the incomplete theoretical foundations of computing or to shift responsibility for the impact of normative design decisions. Further, their nuanced technical nature often complicates thorough sociotechnical scrutiny of the discretionary decisions made to manage them. We describe three specific classes of such objects: heuristic models, assumptions, and parameters. We raise six reasons why these objects may be hazardous to comprehensive analysis of computing, and we argue that they deserve deliberate consideration as researchers explain their scientific work.