🤖 AI Summary
Background: Cybersecurity expertise is scarce in forestry cyber-physical systems (CPS), and conventional risk assessment relies heavily on manual analysis of sensitive operational data, posing privacy and scalability challenges. Method: This paper proposes a locally deployed LLM-RAG–based assistive assessment framework designed for fully offline operation. Integrating retrieval-augmented generation (RAG) with human-in-the-loop interaction, the framework supports engineers in threat identification and preliminary risk scoring without transmitting sensitive data externally. Contribution/Results: Evaluated through expert interviews, interactive session experiments, and structured surveys, the system demonstrates the capability to autonomously generate initial risk reports, detect latent threats, and provide actionable verification suggestions. Domain experts endorse its utility as a decision-support tool but stress the necessity of human oversight to ensure analytical accuracy and regulatory compliance. This work establishes a privacy-preserving, deployable technical pathway for risk assessment in critical infrastructure domains.
📝 Abstract
In safety-critical software systems, cybersecurity activities are essential, with risk assessment among the most critical. In many software teams, cybersecurity experts are either entirely absent or represented by only a small number of specialists. As a result, the workload on these experts is high, and software engineers must often conduct cybersecurity activities themselves. This creates a need for tools that support cybersecurity experts and engineers in evaluating vulnerabilities and threats during the risk assessment process. This paper explores the potential of leveraging locally hosted large language models (LLMs) with retrieval-augmented generation to support cybersecurity risk assessment in the forestry domain while complying with data protection and privacy requirements that limit external data sharing. We performed a design science study involving 12 experts in interviews, interactive sessions, and a survey within a large-scale project. The results demonstrate that LLMs can assist cybersecurity experts by generating initial risk assessments, identifying threats, and providing redundancy checks. The results also highlight the necessity of human oversight to ensure accuracy and compliance. Despite trust concerns, experts were willing to use LLMs in specific evaluation and assistance roles, rather than relying solely on their generative capabilities. This study provides insights that encourage the use of LLM-based agents to support the risk assessment process of cyber-physical systems in safety-critical domains.