A Roadmap for Tamed Interactions with Large Language Models

📅 2025-10-28
🤖 AI Summary
Large language models (LLMs) suffer from hallucination, unreliability, and uncontrolled behavior, hindering their trustworthy deployment in safety-critical workflows; existing reliability-enhancement tools are fragmented and lack a systematic framework. This paper introduces LSL (LLM Scripting Language), a domain-specific scripting language that embeds formal specifications, verifiable constraints, and explainability mechanisms directly into the LLM interaction process—enabling structured output constraints, programmable behavioral control, and decoupled execution governance. LSL unifies domain-specific language (DSL) design, formal verification, and runtime checking, significantly improving output reliability, consistency, and traceability. Experiments demonstrate that LSL effectively mitigates hallucination across diverse tasks, supports safe and controllable LLM integration, and establishes a novel interaction paradigm for trustworthy AI systems.

📝 Abstract
We are witnessing a bloom of AI-powered software driven by Large Language Models (LLMs). Although the applications of these LLMs are impressive and seemingly countless, their unreliability hinders adoption. In fact, the tendency of LLMs to produce faulty or hallucinated content makes them unsuitable for automating workflows and pipelines. In this regard, Software Engineering (SE) provides valuable support, offering a wide range of formal tools to specify, verify, and validate software behaviour. Such SE tools can be applied to define constraints over LLM outputs and, consequently, offer stronger guarantees on the generated content. In this paper, we argue that the development of a Domain Specific Language (DSL) for scripting interactions with LLMs, which we call the LLM Scripting Language (LSL), may be key to improving AI-based applications. Currently, LLMs and LLM-based software still lack reliability, robustness, and trustworthiness, and the tools and frameworks meant to cope with these issues suffer from fragmentation. In this paper, we present our vision of LSL. With LSL, we aim to address the limitations above by exploring ways to control LLM outputs, enforce structure in interactions, and integrate these aspects with verification, validation, and explainability. Our goal is to make LLM interaction programmable and decoupled from training or implementation.
Problem

Research questions and friction points this paper is trying to address.

Addressing LLM unreliability and hallucination issues
Developing DSL to control LLM outputs and interactions
Integrating verification and validation for trustworthy LLM applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops DSL for scripting LLM interactions
Enforces structured constraints on LLM outputs
Integrates verification and validation mechanisms
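The contributions above can be illustrated with a small sketch. The paper does not define a concrete LSL syntax or API, so everything here (the `OutputSpec` class, its `check` method, the field names) is hypothetical: it only shows the general idea of declaring a constraint over an LLM's raw output and validating it at runtime, before the text reaches downstream steps.

```python
import json
import re

class OutputSpec:
    """Hypothetical declarative constraint over an LLM's raw text output.

    In the spirit of LSL: the constraint is stated once, separately from
    any prompt or model, and enforced by a runtime check.
    """

    def __init__(self, required_keys, id_pattern=None):
        self.required_keys = set(required_keys)
        self.id_pattern = re.compile(id_pattern) if id_pattern else None

    def check(self, raw_output: str):
        """Return (ok, errors) for a candidate model output."""
        errors = []
        # Structural constraint: the output must parse as JSON at all.
        try:
            data = json.loads(raw_output)
        except json.JSONDecodeError:
            return False, ["output is not valid JSON"]
        # Schema constraint: every declared key must be present.
        missing = self.required_keys - data.keys()
        if missing:
            errors.append(f"missing keys: {sorted(missing)}")
        # Value constraint: an 'id' field must match the declared pattern.
        if self.id_pattern and not self.id_pattern.fullmatch(str(data.get("id", ""))):
            errors.append("field 'id' violates the declared pattern")
        return not errors, errors


spec = OutputSpec(required_keys={"id", "answer"}, id_pattern=r"[A-Z]{3}-\d+")

ok, errs = spec.check('{"id": "ABC-42", "answer": "yes"}')
bad, errs2 = spec.check('{"answer": "yes"}')
```

A caller could retry the model, repair the output, or abort the pipeline whenever `check` reports violations; decoupling the constraint from the prompt is what makes the interaction programmable rather than model-specific.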