Not a Swiss Army Knife: Academics' Perceptions of Trade-Offs Around Generative Artificial Intelligence Use

📅 2024-05-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
This study investigates sociotechnical concerns surrounding generative AI (GenAI) adoption in higher education, focusing on risks stemming from opaque training data—including inaccuracies, biases, and harmful content—and their implications for academic trust, educational equity, and epistemic authority. Drawing on in-depth interviews with 18 faculty members and students, the research employs thematic coding and critical discourse analysis to identify four core tensions: lack of data transparency, erosion of stakeholder trust, inequitable technological access, and reconfiguration of knowledge authority. It offers the first systematic account of academics’ dialectical perceptions of GenAI’s ethical risks and pedagogical potential. Introducing the “technology-is-not-omnipotent” analytical framework, the study advocates context-sensitive governance and enhanced pedagogical readiness as empirically grounded, locally responsive strategies. Findings provide both theoretical insights and evidence-based recommendations for responsible GenAI integration and policy development in educational settings.

📝 Abstract
In the rapidly evolving landscape of computing disciplines, substantial efforts are being dedicated to unraveling the sociotechnical implications of generative AI (Gen AI). While existing research has manifested in various forms, there remains a notable gap concerning the direct engagement of knowledge workers in academia with Gen AI. We interviewed 18 knowledge workers, including faculty and students, to investigate the social and technical dimensions of Gen AI from their perspective. Our participants raised concerns about the opacity of the data used to train Gen AI. This lack of transparency makes it difficult to identify and address inaccurate, biased, and potentially harmful information generated by these models. Knowledge workers also expressed worries about Gen AI undermining trust in the relationship between instructor and student, and discussed potential solutions, such as pedagogy readiness, to mitigate these risks. Additionally, participants recognized Gen AI's potential to democratize knowledge by accelerating the learning process and acting as an accessible research assistant. However, there were also concerns about potential social and power imbalances stemming from unequal access to such technologies. Our study offers insights into the concerns and hopes of knowledge workers about the ethical use of Gen AI in educational settings and beyond, with implications for navigating this new landscape.
Problem

Research questions and friction points this paper is trying to address.

Investigating academic concerns about generative AI data opacity
Examining how generative AI affects instructor-student trust relationships
Analyzing social imbalances from unequal generative AI access
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interviewed academics on Gen AI perceptions
Identified data opacity and bias concerns
Proposed pedagogy readiness as mitigation solution