🤖 AI Summary
To address the challenge of adapting large language models (LLMs) to proprietary industrial programming languages—such as ABB RAPID—in automation domains, this paper proposes a fine-tuning-free, few-shot prompting method enabling locally deployed LLMs to directly comprehend and modify RAPID programs. By eliminating reliance on large-scale annotated datasets or custom model training, the approach preserves data privacy and enhances deployment flexibility. Experimental evaluation demonstrates its effectiveness on elementary tasks including code repair and logical adaptation, substantially lowering the adoption barrier for LLMs in non-general-purpose industrial language settings. Key contributions include: (i) the first systematic investigation into LLM support for closed industrial languages like RAPID; (ii) a lightweight, secure, and plug-and-play prompting framework; and (iii) a low-overhead, highly controllable paradigm for AI-assisted programming tailored to high-sensitivity industrial environments.
📝 Abstract
How to best use Large Language Models (LLMs) for software engineering has been covered in many publications in recent years. However, most of this work focuses on widely-used general-purpose programming languages. The utility of LLMs for software within the industrial process automation domain, with highly-specialized languages that are typically only used in proprietary contexts, is still underexplored. Within this paper, we study what enterprises can achieve on their own without investing large amounts of effort into the training of models specific to the domain-specific languages that are used. We show that few-shot prompting approaches are sufficient to solve simple problems in a language that is otherwise not well-supported by an LLM, and that this is possible on-premise, thereby ensuring the protection of sensitive company data.
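To make the core idea concrete, here is a minimal sketch of the few-shot prompting pattern the abstract describes: worked (broken, repaired) RAPID example pairs are prepended to the task, and the assembled prompt is then sent to a locally deployed LLM. The prompt wording, the example pairs, and the `build_prompt` helper are illustrative assumptions, not the paper's actual prompts; the model call itself is omitted.

```python
# Hypothetical sketch of few-shot prompting for RAPID code repair.
# The example pairs and prompt format are assumptions for illustration,
# not the prompts used in the paper.

# Few-shot examples: (broken RAPID snippet, repaired RAPID snippet) pairs.
FEW_SHOT_EXAMPLES = [
    (
        # Missing comma before the tool argument of MoveL.
        "PROC main()\n  MoveL p10, v100, z50 tool0;\nENDPROC",
        "PROC main()\n  MoveL p10, v100, z50, tool0;\nENDPROC",
    ),
]

def build_prompt(examples, broken_code):
    """Assemble a few-shot repair prompt for a RAPID snippet."""
    parts = ["You are an expert in ABB RAPID. Repair the given program.\n"]
    for bad, good in examples:
        parts.append(f"### Broken:\n{bad}\n### Repaired:\n{good}\n")
    # The target snippet ends the prompt, so the model completes the repair.
    parts.append(f"### Broken:\n{broken_code}\n### Repaired:\n")
    return "\n".join(parts)

prompt = build_prompt(
    FEW_SHOT_EXAMPLES,
    "PROC main()\n  MoveJ p20 v200, z10, tool0;\nENDPROC",
)
# In an on-premise setup, `prompt` would now be sent to a locally hosted
# model (e.g. via a local inference server); that call is omitted here.
print(prompt)
```

Because no model weights or company code ever leave the local environment, this pattern keeps sensitive programs private while still benefiting from a general-purpose LLM, which is the deployment property the abstract emphasizes.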