LLMs in Cybersecurity: Friend or Foe in the Human Decision Loop?

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) used as cybersecurity decision aids may exert a double-edged effect on human judgment: enhancing accuracy while undermining independent reasoning, amplifying automation bias, and fostering decision homogenization. Method: a focus-group study comparing user performance on security tasks with and without LLM support, measuring decision accuracy, behavioral resilience, and reliance dynamics; participants were stratified by cognitive-resilience level. Contribution/Results: LLMs significantly improved accuracy and consistency on routine tasks but suppressed cognitive diversity. High-resilience users actively calibrated LLM outputs and mitigated bias, whereas low-resilience users exhibited heightened overreliance. The study provides the first empirical evidence that cognitive resilience is a critical individual moderator of LLM–human collaborative efficacy in cybersecurity, underscoring the need for resilience-centered design of human–AI collaboration frameworks in safety-critical domains.
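The stratified comparison the Method describes (accuracy per condition, broken down by resilience level) can be sketched in a few lines. This is a minimal illustration with hypothetical trial records; the condition labels, strata, and values are placeholders, not the paper's data:

```python
from statistics import mean

# Hypothetical trial records: (condition, resilience stratum, correct?).
# Purely illustrative placeholders, not data from the study.
trials = [
    ("unaided", "high", 1), ("unaided", "high", 0),
    ("unaided", "low", 0), ("unaided", "low", 0),
    ("llm", "high", 1), ("llm", "high", 1),
    ("llm", "low", 1), ("llm", "low", 0),
]

def accuracy(condition, resilience):
    """Mean accuracy within one condition x resilience stratum."""
    hits = [ok for cond, res, ok in trials
            if cond == condition and res == resilience]
    return mean(hits)

for cond in ("unaided", "llm"):
    for res in ("high", "low"):
        print(f"{cond:8s} {res:4s} accuracy={accuracy(cond, res):.2f}")
```

Comparing the `unaided` and `llm` rows within each stratum is the basic contrast the study draws; a per-stratum difference (rather than a pooled one) is what lets resilience emerge as a moderator.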

📝 Abstract
Large Language Models (LLMs) are transforming human decision-making by acting as cognitive collaborators. Yet this promise comes with a paradox: while LLMs can improve accuracy, they may also erode independent reasoning, promote over-reliance, and homogenize decisions. In this paper, we investigate how LLMs shape human judgment in security-critical contexts. Through two exploratory focus groups (unaided and LLM-supported), we assess decision accuracy, behavioral resilience, and reliance dynamics. Our findings reveal that while LLMs enhance accuracy and consistency in routine decisions, they can inadvertently reduce cognitive diversity and amplify automation bias, particularly among users with lower resilience. In contrast, high-resilience individuals leverage LLMs more effectively, suggesting that cognitive traits mediate AI benefit.
Problem

Research questions and friction points this paper is trying to address.

Investigating how LLMs shape human judgment in security-critical decision contexts
Assessing decision accuracy, behavioral resilience and reliance dynamics with LLMs
Examining whether LLMs reduce cognitive diversity and increase automation bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as cognitive collaborators in cybersecurity
Focus groups assess decision accuracy and reliance
Cognitive traits mediate how much users benefit from AI support