Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines

📅 2025-04-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Learners’ limited prompt engineering proficiency impedes effective LLM interaction in educational settings. Method: We propose a learner-centered, structured prompt guidance framework and use Von NeuMidas, an extended pragmatic annotation schema, to enable fine-grained attribution of interaction errors. A controlled experiment on 642 authentic dialogues from 107 users compares three prompt guidance strategies. Contribution/Results: The task-oriented, self-constructed guidance framework significantly improves prompt precision (+42%) and strategy adherence (+57%), while enhancing AI response relevance (+31%) and practicality (+28%). The controlled design provides empirical evidence linking prompt instruction to learners’ strategy adoption and output quality, yielding a transferable pedagogical intervention paradigm and an assessment toolkit for cultivating AI literacy.

📝 Abstract
Large Language Models (LLMs) have transformed human-computer interaction by enabling natural language-based communication with AI-powered chatbots. These models are designed to be intuitive and user-friendly, allowing users to articulate requests with minimal effort. However, despite their accessibility, studies reveal that users often struggle with effective prompting, resulting in inefficient responses. Existing research has highlighted both the limitations of LLMs in interpreting vague or poorly structured prompts and the difficulties users face in crafting precise queries. This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting. We introduce and compare three types of prompting guidelines: a task-specific framework developed through a structured methodology and two baseline approaches. To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users. Using Von NeuMidas, an extended pragmatic annotation schema for LLM interaction analysis, we categorize common prompting errors and identify recurring behavioral patterns. We then evaluate the impact of different guidelines by examining changes in user behavior, adherence to prompting strategies, and the overall quality of AI-generated responses. Our findings provide a deeper understanding of how users engage with LLMs and the role of structured prompting guidance in enhancing AI-assisted communication. By comparing different instructional frameworks, we offer insights into more effective approaches for improving user competency in AI interactions, with implications for AI literacy, chatbot usability, and the design of more responsive AI systems.
Problem

Research questions and friction points this paper is trying to address.

Investigates learner-AI interactions and prompting efficacy
Compares structured prompting guidelines for better AI responses
Analyzes user behavior to improve AI communication effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured prompting guidelines for effective queries
Von NeuMidas schema for interaction analysis
Comparative evaluation of prompting frameworks
Cansu Koyuturk
University of Milano-Bicocca
Psychology, Human Computer Interaction
Emily Theophilou
Universitat Pompeu Fabra
Educational Technologies, User Behaviour, Human Computer Interaction
Sabrina Patania
Postdoc Researcher, University of Milano-Bicocca
Bayesian modelling, Active Inference, Language Models, Perceptual Computing, Cognitive Models
Gregor Donabauer
Information Science, University of Regensburg
Natural Language Processing, Machine Learning, Information Retrieval
Andrea Martinenghi
Università degli Studi di Milano Bicocca, Milan, Italy
Chiara Antico
Università degli Studi di Milano Bicocca, Milan, Italy
Alessia Telari
PhD Student, University of Milano-Bicocca
Social Psychology, Social Exclusion, Social Connection, Ghosting, Artificial Intelligence
Alessia Testa
Università degli Studi di Milano Bicocca, Milan, Italy
Sathya Bursic
Università degli Studi di Milano Bicocca, Milan, Italy
Franca Garzotto
University of Milano-Bicocca and Politecnico di Milano
Conversational AI, Cross-reality, Interactive Smart Spaces, Language & cognitive disorders
D. Hernández-Leo
Universitat Pompeu Fabra, Barcelona, Spain
Udo Kruschwitz
Professor, University of Regensburg
Information Retrieval, Natural Language Engineering, Natural Language Processing
Davide Taibi
Full Professor, University of Oulu (M3S Cloud)
Software Architecture, Cloud Continuum, Microservices, Serverless, Empirical Software Engineering
Simona Amenta
Università degli Studi di Milano Bicocca, Milan, Italy
Martin Ruskov
Università degli studi di Milano
Crowdsourcing, Serious Games, Technology-Enhanced Learning
Dimitri Ognibene
University of Milano Bicocca
artificial intelligence, robotics, machine learning, cognitive science, neuroscience