Short-term AI literacy intervention does not reduce over-reliance on incorrect ChatGPT recommendations

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether a brief AI literacy intervention can mitigate high school students' over-reliance on erroneous ChatGPT suggestions. Method: A randomized controlled trial was conducted in which participants solved mathematical puzzles using ChatGPT responses deliberately engineered to contain a 50% error rate; the intervention group received concise instruction on large language model (LLM) mechanisms and limitations, while the control group received none. Contribution/Results: Contrary to expectations, the intervention did not reduce error adoption (overall error acceptance remained high at 52.1%) and instead significantly increased the rate at which correct suggestions were ignored. This counterintuitive finding challenges the prevailing assumption that better conceptual understanding alone improves how people use generative AI. It offers first empirical evidence that short-term AI literacy initiatives may fail to temper uncritical trust in AI outputs and can instead miscalibrate it, leading users to discount correct recommendations. The results carry critical implications for the design of AI education curricula and the ethics of human-AI collaboration.

📝 Abstract
In this study, we examined whether a short-form AI literacy intervention could reduce the adoption of incorrect recommendations from large language models. High school seniors were randomly assigned to either a control group or an intervention group; the latter received an educational text explaining ChatGPT's working mechanism, its limitations, and its proper use. Participants solved math puzzles with the help of ChatGPT's recommendations, which were incorrect in half of the cases. Results showed that students adopted incorrect suggestions 52.1% of the time, indicating widespread over-reliance. The educational intervention did not significantly reduce over-reliance; instead, it increased the rate at which participants ignored ChatGPT's correct recommendations. We conclude that ChatGPT use is associated with over-reliance and that countering it by increasing AI literacy is not trivial.
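To make the two outcome measures in the abstract concrete, here is a minimal sketch of how per-condition rates could be tabulated from trial records. The data below are entirely hypothetical (the paper's raw data are not reproduced here); only the two metrics, error adoption (following an incorrect suggestion) and ignoring a correct suggestion, mirror the study's design.

```python
# Illustrative only: hypothetical per-trial records, NOT the study's data.
# Each record: (group, chatgpt_correct, participant_followed)
trials = [
    ("control", False, True),       # adopted an incorrect suggestion (over-reliance)
    ("control", True, True),        # followed a correct suggestion
    ("intervention", False, True),
    ("intervention", True, False),  # ignored a correct suggestion
    ("intervention", True, True),
    ("control", False, False),      # rejected an incorrect suggestion
]

def rates(group):
    """Return (error_adoption_rate, correct_ignored_rate) for one condition."""
    rows = [t for t in trials if t[0] == group]
    wrong = [t for t in rows if not t[1]]   # trials where ChatGPT was incorrect
    right = [t for t in rows if t[1]]       # trials where ChatGPT was correct
    error_adoption = sum(t[2] for t in wrong) / len(wrong)        # followed when wrong
    correct_ignored = sum(not t[2] for t in right) / len(right)   # ignored when right
    return error_adoption, correct_ignored

for g in ("control", "intervention"):
    ea, ci = rates(g)
    print(f"{g}: error adoption {ea:.1%}, correct suggestions ignored {ci:.1%}")
```

With real trial data, the study's comparison amounts to testing whether these two proportions differ between conditions; the paper reports no significant drop in the first and a significant rise in the second for the intervention group.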
Problem

Research questions and friction points this paper is trying to address.

Examines whether AI literacy reduces reliance on incorrect ChatGPT recommendations.
Tests a short-form intervention on high school students using ChatGPT.
Finds the intervention fails to reduce over-reliance on incorrect AI advice.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Short-form AI literacy intervention tested in a randomized design.
High school students used ChatGPT to solve math puzzles.
Intervention increased the rate of ignoring ChatGPT's correct recommendations.