🤖 AI Summary
Developing conversational agents for the textile circular economy is hindered by a severe scarcity of domain-specific data.
Method: We propose a prompt-driven paradigm to build TextileBot, a voice-based dialogue agent that enhances large language models' (LLMs) zero-shot domain reasoning via structured domain knowledge embedded directly into prompts, eliminating the need for domain fine-tuning or training data. We design an interpretable, reusable structured prompt template tailored to this niche vertical domain.
Contribution/Results: To our knowledge, this is the first empirical validation of a purely prompt-engineering-driven voice agent in a specialized domain. A mixed-method field evaluation with 30 participants, combining quantitative task performance metrics with qualitative analysis of usability and trustworthiness, demonstrates TextileBot's capability for multi-turn expert dialogue. Significant performance differences across three prompt variants in information-seeking tasks confirm that structured prompt design critically determines interaction quality.
📄 Abstract
Developing domain-specific conversational agents (CAs) has been challenged by the need for extensive domain-focused data. Recent advancements in Large Language Models (LLMs) make them a viable option as a knowledge backbone. LLMs' behaviour can be enhanced through prompting, instructing them to perform downstream tasks in a zero-shot fashion (i.e. without training). To this end, we incorporated structural knowledge into prompts and used prompted LLMs to prototype domain-specific CAs. We demonstrate a case study in a specific domain, textile circularity, with TextileBot, and present its design, development, and evaluation. Specifically, we conducted an in-person user study (N=30) with Free Chat and Information-Gathering tasks with TextileBots to gather insights from the interactions. We analyse the human-agent interactions, combining quantitative and qualitative methods. Our results suggest that participants engaged in multi-turn conversations, and that their perceptions of the three agent variants and the respective interactions varied, demonstrating the effectiveness of our prompt-based LLM approach. We discuss the dynamics of these interactions and their implications for designing future voice-based CAs.
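The abstract does not reproduce the actual prompt template, but the idea of embedding structured domain knowledge directly into a prompt can be sketched as follows. This is a minimal, hypothetical illustration: the section names, domain facts, and conversation rules below are invented for demonstration and are not the authors' template.

```python
# Hypothetical sketch of a structured domain prompt for a zero-shot CA.
# All section headings and facts are illustrative, not the paper's template.

def build_prompt(role, domain_facts, style_rules, user_utterance):
    """Assemble a structured prompt that embeds domain knowledge directly,
    so the LLM can answer in-domain questions without fine-tuning."""
    facts = "\n".join(f"- {fact}" for fact in domain_facts)
    rules = "\n".join(f"- {rule}" for rule in style_rules)
    return (
        f"ROLE:\n{role}\n\n"
        f"DOMAIN KNOWLEDGE:\n{facts}\n\n"
        f"CONVERSATION RULES:\n{rules}\n\n"
        f"USER:\n{user_utterance}\nAGENT:"
    )

prompt = build_prompt(
    role="You are TextileBot, an expert on textile circularity.",
    domain_facts=[
        "Mechanical recycling shortens cotton fibres.",
        "Blended fabrics are harder to recycle than mono-materials.",
    ],
    style_rules=[
        "Answer in at most three sentences, suitable for speech.",
        "Ask a clarifying question when the request is ambiguous.",
    ],
    user_utterance="Can my old polyester-cotton shirt be recycled?",
)
print(prompt)
```

Varying the structure or content of such sections (as the study's three prompt variants apparently do) changes the agent's behaviour without any retraining; the completed prompt is simply sent to the LLM as the next-turn context.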