AI Literacy Assessment Revisited: A Task-Oriented Approach Aligned with Real-world Occupations

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing AI literacy assessments overemphasize foundational technical knowledge—such as programming and mathematics—while neglecting applied competencies including model output interpretation, tool selection, and ethical reasoning, thus failing to reflect real-world workplace demands. Method: This study proposes a profession-oriented AI literacy evaluation model tailored for non-technical practitioners, departing from traditional knowledge-centric paradigms. It introduces a novel assessment framework grounded in authentic occupational contexts, integrating scenario-based multiple-choice items with practice-oriented competition tasks to construct a formative assessment instrument. Contribution/Results: Empirically validated in naval robotics training, the task-oriented approach demonstrates significantly higher validity in measuring applied AI literacy compared to conventional assessments. It yields more accurate, contextually grounded proficiency estimates and establishes a scalable, generalizable paradigm for AI literacy evaluation across diverse professional domains.

📝 Abstract
As artificial intelligence (AI) systems become ubiquitous in professional contexts, there is an urgent need to equip workers, often with backgrounds outside of STEM, with the skills to use these tools effectively as well as responsibly, that is, to be AI literate. However, prevailing definitions and therefore assessments of AI literacy often emphasize foundational technical knowledge, such as programming, mathematics, and statistics, over practical knowledge such as interpreting model outputs, selecting tools, or identifying ethical concerns. This leaves a noticeable gap in assessing someone's AI literacy for real-world job use. We propose a work-task-oriented assessment model for AI literacy that is grounded in the competencies required for effective use of AI tools in professional settings. We describe the development of a novel AI literacy assessment instrument, and accompanying formative assessments, in the context of a US Navy robotics training program. The program included training in robotics and AI literacy, as well as a competition with practical tasks and a multiple-choice scenario task meant to simulate use of AI in a job setting. We found that, as a measure of applied AI literacy, the competition's scenario task outperformed the tests we adopted from past research or developed ourselves. We argue that when training people for AI-related work, educators should consider evaluating them with instruments that emphasize highly contextualized practical skills rather than abstract technical knowledge, especially when preparing workers without technical backgrounds for AI-integrated roles.
Problem

Research questions and friction points this paper is trying to address.

Assessing AI literacy for real-world job applications beyond technical knowledge
Bridging the gap between abstract AI concepts and practical workplace competencies
Developing evaluation methods for non-STEM workers using AI tools effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-oriented AI literacy assessment model
Competency-based evaluation for professional settings
Contextualized practical skills over abstract knowledge
Christopher Bogart
Carnegie Mellon University, Pittsburgh, PA, USA
A. Warrier
Carnegie Mellon University, Pittsburgh, PA, USA
Arav Agarwal
Carnegie Mellon University, Pittsburgh, PA, USA
Ross Higashi
Carnegie Mellon University, Pittsburgh, PA, USA
Yufan Zhang
George Mason University
Computer Vision
Jesse Flot
Carnegie Mellon University, Pittsburgh, PA, USA
Jaromir Savelka
Carnegie Mellon University, Pittsburgh, PA, USA
Heather Burte
Lab Director, Carnegie Mellon University
Cognitive Psychology · Educational Psychology · STEM Learning
Majd Sakr
Professor of Computer Science, Carnegie Mellon University
Online Education · Cloud Computing · Human-Robot Interaction