Generative AI Uses and Risks for Knowledge Workers in a Science Organization

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the actual usage patterns, application scenarios, and multidimensional risks of generative AI among scientific and operational staff at a U.S. national laboratory. Method: A mixed-methods approach was employed, integrating survey responses from 66 employees, 22 in-depth interviews, and analysis of real-world usage logs from the internal Argo platform. Contribution/Results: The study identifies, for the first time within a real scientific organization, a dual-mode generative AI adoption paradigm: copilot (human-AI collaboration) and workflow agent (autonomous task orchestration). It empirically links adoption behaviors to three cross-cutting risk domains: sensitive data security, scholarly publishing integrity, and workforce impact. Although current usage remains low, it is steadily increasing, and four high-frequency application scenarios are distilled. Based on these findings, the study proposes a balanced AI implementation framework for research organizations, one that supports both responsible governance and capability enhancement, providing empirical grounding for science institutions to develop context-sensitive, differentiated AI policies.

📝 Abstract
Generative AI could enhance scientific discovery by supporting knowledge workers in science organizations. However, the real-world applications and perceived concerns of generative AI use in these organizations are uncertain. In this paper, we report on a collaborative study with a US national laboratory with employees spanning Science and Operations about their use of generative AI tools. We surveyed 66 employees, interviewed a subset (N=22), and measured early adoption of an internal generative AI interface called Argo lab-wide. We have four findings: (1) Argo usage data shows small but increasing use by Science and Operations employees; Common current and envisioned use cases for generative AI in this context conceptually fall into either a (2) copilot or (3) workflow agent modality; and (4) Concerns include sensitive data security, academic publishing, and job impacts. Based on our findings, we make recommendations for generative AI use in science and other organizations.
Problem

Research questions and friction points this paper is trying to address.

Generative AI
Scientific Institutions
Impact on Research and Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative AI in Research Institutions
Risk Identification
Strategic Recommendations
Kelly B. Wagman — University of Chicago
Matthew T. Dearing — Argonne National Lab
Marshini Chetty — University of Chicago
Human Computer Interaction · Pervasive and Ubiquitous Computing · Usable Security